Teaching Quality Improvement to Internal Medicine Residents to Address Patient Care Gaps in Ambulatory Quality Metrics
ABSTRACT
Objective: To teach internal medicine residents quality improvement (QI) principles in an effort to improve resident knowledge and comfort with QI, as well as address quality care gaps in resident clinic primary care patient panels.
Design: A QI curriculum was implemented for all residents rotating through a primary care block over a 6-month period. Residents completed Institute for Healthcare Improvement (IHI) modules, participated in a QI workshop, and received panel data reports, ultimately completing a plan-do-study-act (PDSA) cycle to improve colorectal cancer screening and hypertension control.
Setting and participants: This project was undertaken at Tufts Medical Center Primary Care, Boston, Massachusetts, the primary care teaching practice for all 75 internal medicine residents at Tufts Medical Center. All internal medicine residents were included, with 55 (73%) of the 75 residents completing the presurvey, and 39 (52%) completing the postsurvey.
Measurements: We administered a 10-question pre- and postsurvey looking at resident attitudes toward and comfort with QI and familiarity with their panel data as well as measured rates of colorectal cancer screening and hypertension control in resident panels.
Results: There was an increase in the numbers of residents who performed a PDSA cycle (P = .002), completed outreach based on their panel data (P = .02), and felt comfortable in both creating aim statements and designing and implementing PDSA cycles (P < .0001). The residents’ knowledge of their panel data significantly increased. There was no significant improvement in hypertension control, but there was an increase in colorectal cancer screening rates (P < .0001).
Conclusion: Providing panel data and performing targeted QI interventions can improve resident comfort with QI, translating to improvement in patient outcomes.
Keywords: quality improvement, resident education, medical education, care gaps, quality metrics.
As quality improvement (QI) has become an integral part of clinical practice, residency training programs have continued to evolve in how best to teach QI. The Accreditation Council for Graduate Medical Education (ACGME) Common Program Requirements mandate that core competencies in residency programs include practice-based learning and improvement and systems-based practice.1 Residents should receive education in QI, receive data on quality metrics and benchmarks related to their patient population, and participate in QI activities. The Clinical Learning Environment Review (CLER) program was established to provide feedback to institutions on 6 focus areas, including patient safety and health care quality. In visits to institutions across the United States, the CLER committees found that many residents had limited knowledge of QI concepts and limited access to data on quality metrics and benchmarks.2
There are many barriers to implementing a QI curriculum in residency programs, and creating and maintaining successful strategies has proven challenging.3 Many QI curricula for internal medicine residents have been described in the literature, but many of these studies focus on resident self-assessment of QI knowledge and the number of projects completed rather than on patient outcomes.4-13 Because there is some evidence suggesting that patients treated by residents have worse outcomes on ambulatory quality measures than patients treated by staff physicians,14,15 it is important to also examine patient outcomes when evaluating a QI curriculum. Experts in education recommend the following to optimize learning: exposure to both didactic and experiential opportunities, connection to health system improvement efforts, and assessment of patient outcomes in addition to learner feedback.16,17 One study also found that providing panel data to residents could improve quality metrics.18
In this study, we sought to investigate the effects of a resident QI intervention during an ambulatory block on both residents’ self-assessments of QI knowledge and attitudes as well as on patient quality metrics.
Methods
Curriculum
We implemented this educational initiative at Tufts Medical Center Primary Care, Boston, Massachusetts, the primary care teaching practice for all 75 internal medicine residents at Tufts Medical Center. Co-located with the 415-bed academic medical center in downtown Boston, the practice serves more than 40,000 patients, approximately 7000 of whom are cared for by resident primary care physicians (PCPs). The internal medicine residents rotate through the primary care clinic as part of continuity clinic during ambulatory or elective blocks. In addition to continuity clinic, the residents have 2 dedicated 3-week primary care rotations during the course of an academic year. Primary care rotations consist of 5 clinic sessions a week as well as structured teaching sessions. Each resident inherits a panel of patients from an outgoing senior resident, with an average panel size of 96 patients per resident.
Prior to this study intervention, we did not provide formal QI teaching to our residents as part of their primary care curriculum, and previous panel management had focused more on chart reviews of patients whom residents perceived to be higher risk. Residents from all 3 years were included in the intervention. We taught a QI curriculum to our residents from January 2018 to June 2018 during the 3-week primary care rotation, which consisted of the following components:
- Institute for Healthcare Improvement (IHI) module QI 102 completed independently online.
- A 2-hour QI workshop led by 1 of 2 primary care faculty with backgrounds in QI, during which residents were taught basic principles of QI, including how to craft aim statements and design plan-do-study-act (PDSA) cycles, and participated in a hands-on QI activity designed to model rapid cycle improvement (the Paper Airplane Factory19).
- Distribution of individualized reports of residents’ patient panel data by email at the start of the primary care block that detailed each panel’s overall rates of colorectal cancer screening and hypertension (HTN) control, along with the average resident panel rates and the average attending panel rates. The reports also included a list of all of the resident’s patients who were overdue for colorectal cancer screening or whose last blood pressure (BP) was uncontrolled (systolic BP ≥ 140 mm Hg or diastolic BP ≥ 90 mm Hg); a simplified sketch of this selection logic appears after this list. These reports were originally designed by our practice’s QI team and run and exported in Microsoft Excel format monthly by our information technology (IT) administrator.
- Instruction on aim statements as a group, followed by the expectation that each resident create an individualized aim statement tailored to each resident’s patient panel rates, with the PDSA cycle to be implemented during the remainder of the primary care rotation, focusing on improvement of colorectal cancer screening and HTN control (see supplementary eFigure 1 online for the worksheet used for the workshop).
- Residents were held accountable for their interventions through several check-ins. At the end of the primary care block, residents were required to submit their completed worksheets showing the intervention they had undertaken and when it was performed. The 2 primary care attendings primarily responsible for QI education reviewed each resident’s work approximately 1 to 2 months after the worksheets were submitted. These attendings sent the residents personalized feedback based on whether the intervention had been completed or successful, as evidenced by documentation in the chart, including direct patient outreach by phone, letter, or portal; outreach to the resident coordinator; a scheduled follow-up appointment; or booking or completion of colorectal cancer screening. Along with this feedback, residents were also sent suggestions for next steps. Resident preceptors were copied on the email to facilitate reinforcement of the goals and plans. Finally, the resident preceptors also helped with accountability by reviewing the residents’ worksheets and patient panel metrics with them during biannual evaluations.
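To make the report logic described above concrete, the following is a minimal sketch, written in Python with pandas, of how a panel report of this kind could be assembled. The actual reports were built and exported in Excel by our QI team and IT administrator; all column names, the file name, and the HTN denominator (patients carrying a hypertension diagnosis) are illustrative assumptions rather than the practice’s actual data model.

```python
import pandas as pd

# Hypothetical panel extract: one row per patient for a given resident PCP.
# Column names are assumptions for illustration; the real reports were Excel exports.
panel = pd.read_csv("resident_panel.csv")

# Patients overdue for colorectal cancer screening.
overdue_crc = panel[~panel["crc_screening_up_to_date"]]

# Patients with a hypertension diagnosis whose last recorded BP was uncontrolled
# (systolic >= 140 mm Hg or diastolic >= 90 mm Hg).
htn = panel[panel["has_htn_diagnosis"]]
uncontrolled_bp = htn[(htn["last_systolic_bp"] >= 140) | (htn["last_diastolic_bp"] >= 90)]

# Panel-level summary rates for the individualized report.
summary = {
    "crc_screening_rate": panel["crc_screening_up_to_date"].mean(),
    "htn_control_rate": 1 - len(uncontrolled_bp) / len(htn),
}
print(summary)
print(overdue_crc[["patient_name", "last_crc_screening_date"]])
print(uncontrolled_bp[["patient_name", "last_systolic_bp", "last_diastolic_bp"]])
```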
Evaluation
Residents were surveyed with a 10-item questionnaire pre and post intervention regarding their attitudes toward QI, understanding of QI principles, and familiarity with their patient panel data. Surveys were anonymous and distributed via the SurveyMonkey platform (see supplementary eFigure 2 online). Residents were asked whether they had ever performed a PDSA cycle, performed patient outreach, or performed an intervention, and whether they knew the rates of diabetes, HTN, and colorectal cancer screening in their patient panels. Questions rated on a 5-point Likert scale were used to assess comfort with panel management, developing an aim statement, and designing and implementing a PDSA cycle, as well as interest in pursuing QI as a career. For the purposes of analysis, these questions were dichotomized into “somewhat comfortable” and “very comfortable” vs “neutral,” “somewhat uncomfortable,” and “very uncomfortable.” Similarly, we dichotomized the question about interest in QI as a career into “somewhat interested” and “very interested” vs “neutral,” “somewhat disinterested,” and “very disinterested.” Because the surveys were anonymous, we were unable to pair the pre- and postintervention surveys; we therefore used a chi-square test to evaluate whether a positive or negative response to each question was associated with survey timing (pre vs post intervention).
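As an illustration of this unpaired comparison, here is a minimal sketch using Python and SciPy (the study itself used SAS 9.4; the counts below are hypothetical and chosen only to roughly mirror the reported proportions for one question).

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table for one dichotomized question:
# rows are survey waves (pre, post); columns are (comfortable, not comfortable).
table = [
    [12, 43],  # pre intervention: 12 of 55 respondents comfortable
    [31, 8],   # post intervention: 31 of 39 respondents comfortable
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, P = {p_value:.4f}")
```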
We also examined rates of HTN control and colorectal cancer screening in all 75 resident panels pre and post intervention. The paired t-test was used to determine whether the mean change from pre to post intervention was significant. SAS 9.4 (SAS Institute Inc.) was used for all analyses. Institutional Review Board exemption was obtained from the Tufts Medical Center IRB. There was no funding received for this study.
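The paired analysis of panel rates can be sketched in the same way; the per-panel values below are simulated for illustration only (the actual analysis was run in SAS 9.4 on the 75 resident panels).

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)

# Simulated pre- and postintervention colorectal cancer screening rates
# for 75 resident panels (illustrative values, not study data).
pre = rng.normal(loc=0.34, scale=0.08, size=75).clip(0, 1)
post = (pre + rng.normal(loc=0.065, scale=0.03, size=75)).clip(0, 1)

# Paired t-test on the per-panel change from pre to post intervention.
t_stat, p_value = ttest_rel(post, pre)
print(f"mean change = {(post - pre).mean():.3f}, t = {t_stat:.2f}, P = {p_value:.4f}")
```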
Results
Respondents
Of the 75 residents, 55 (73%) completed the survey prior to the intervention, and 39 (52%) completed the survey after the intervention.
Panel Knowledge and Intervention
Prior to the intervention, 45% of residents had performed a PDSA cycle, compared with 77% post intervention, a significant increase (P = .002) (Table 1). Sixty-two percent of residents had performed outreach or an intervention based on their patient panel reports prior to the intervention, compared with 85% of residents post intervention, which was also a significant increase (P = .02). The post-intervention rate did not reach 100% because some residents either missed the initial workshop or did not follow through with their planned intervention. Common interventions included residents giving their coordinators a list of patients to call to schedule appointments, enlisting fellow team members (eg, pharmacists, social workers) for targeted patient outreach, or calling patients themselves to reestablish a connection.
In terms of knowledge of their patient panels, prior to the intervention, 55%, 62%, and 62% of residents knew the rates of patients in their panel with diabetes, HTN, and colorectal cancer screening, respectively. After the intervention, the residents’ knowledge of these rates increased significantly, to 85% for diabetes (P = .002), 97% for HTN (P < .0001), and 97% for colorectal cancer screening (P < .0001).
Comfort With QI Approaches
Prior to the intervention, 82% of residents were comfortable managing their primary care panel, which did not change significantly post intervention (Table 2). The residents’ comfort with designing an aim statement did significantly increase, from 55% to 95% (P < .0001). The residents also had a significant increase in comfort with both designing and implementing a PDSA cycle. Prior to the intervention, 22% felt comfortable designing a PDSA cycle, which increased to 79% (P < .0001) post intervention, and 24% felt comfortable implementing a PDSA cycle, which increased to 77% (P < .0001) post intervention.
Patient Outcome Measures
The rate of HTN control in the residents' patient panels did not change significantly pre and post intervention (Table 3). The rate of resident patients who were up to date with colorectal cancer screening increased by 6.5% post intervention (P < .0001).
Interest in QI as a Career
As part of the survey, residents were asked how interested they were in making QI a part of their career. Fifty percent of residents indicated an interest in QI pre intervention, and 54% indicated an interest post intervention, which was not a significant difference (P = .72).
Discussion
In this study, we found that integration of a QI curriculum into a primary care rotation improved both residents’ knowledge of their patient panels and comfort with QI approaches, which translated to improvement in patient outcomes. Several previous studies have found improvements in resident self-assessment or knowledge after implementation of a QI curriculum.4-13 Liao et al3 implemented a longitudinal curriculum including both didactic and experiential components and found an improvement in both QI confidence and knowledge. Similarly, Duello et al8 found that a curriculum including both didactic lectures and QI projects improved subjective QI knowledge and comfort. Interestingly, Fok and Wong9 found that resident knowledge could be sustained post curriculum after completion of a QI project, suggesting that experiential learning may be helpful in maintaining knowledge.
Studies also have looked at providing performance data to residents. Hwang et al18 found that providing audit and feedback in the form of individual panel performance data to residents compared with practice targets led to statistically significant improvement in cancer screening rates and composite quality score, indicating that there is tremendous potential in providing residents with their data. While the ACGME mandates that residents should receive data on their quality metrics, on CLER visits, many residents interviewed noted limited access to data on their metrics and benchmarks.1,2
Though previous studies have individually examined teaching QI concepts, providing panel data, or targeting select metrics, our study was unique in that it reviewed both self-reported resident outcomes and actual patient outcomes. In addition to finding increased knowledge of patient panels and comfort with QI approaches, we found a significant increase in colorectal cancer screening rates post intervention. This finding is particularly important given evidence that residents' patients have worse outcomes on quality metrics than patients cared for by staff physicians.14,15 Given that having a resident physician as a PCP has been associated with failing to meet quality measures, it is especially important to focus targeted quality improvement initiatives on this patient population to reduce disparities in care.
We found that residents had improved knowledge of their patient panels as a result of this initiative. Residents demonstrated greater knowledge of their HTN and colorectal cancer screening rates than of their diabetes metrics. We suspect this is because residents are provided with multiple metrics related to diabetes, including process measures such as A1c testing as well as outcome measures such as A1c control, so it may be harder for them to gauge exactly how they are doing with their diabetes patients, whereas HTN control and colorectal cancer screening each have only 1 associated metric. Interestingly, even though HTN and colorectal cancer screening were the 2 measures targeted in the study, residents also had a significant improvement in knowledge of the rates of diabetes in their panels. This suggests that receiving data alone is valuable and may translate to better outcomes through a better baseline understanding of panels. We believe that our intervention was successful because it included both a didactic and an experiential component, as well as the use of individual panel performance data.
There were several limitations to our study. It was performed at a single institution, resulting in a small sample size. Our data analysis was limited because the anonymous survey prevented us from pairing pre- and postintervention responses. We also did not have full participation in the postintervention survey, which may have biased the results in favor of high performers. Another limitation was that the survey relied on self-reported outcomes for the questions about residents’ knowledge of their patient panels.
This study required a 2-hour workshop every 3 weeks led by a faculty member trained in QI. Given the amount of time needed for the curriculum, this study may be difficult to replicate at other institutions, especially if faculty with an interest or training in QI are not available. Given our finding that residents had increased knowledge of their patient panels after receiving panel metrics, simply providing data with the goal of smaller, focused interventions may be easier to implement. At our institution, we discontinued the longer 2-hour QI workshops designed to teach QI approaches more broadly. We continue to provide individualized panel data to all residents during their primary care rotations and conduct half-hour, small-group workshops with the interns that focus on drafting aim statements and planning interventions. All residents are required to submit worksheets to us at the end of their primary care blocks listing their current rates of each predetermined metric and laying out their aim statements and planned interventions. Residents also continue to receive feedback afterward from our faculty with expertise in QI on their plans and on evidence of follow-through in the chart, with their preceptors included on the feedback emails. Even without the larger QI workshop, this approach has continued to be successful and appreciated. Improvement in colorectal cancer screening also appears to have been sustained over several years: by the end of our study period, the resident patient colorectal cancer screening rate had risen from 34% to 43%, and during the 2021-2022 academic year, the rate rose further, from 46% to 50%.
Given that the resident clinic patient population is at higher risk overall, targeted outreach and approaches to improve quality must be continued. Future areas of research include looking at which interventions, whether QI curriculum, provision of panel data, or required panel management interventions, translate to the greatest improvements in patient outcomes in this vulnerable population.
Conclusion
Our study showed that a dedicated QI curriculum for residents, combined with access to quality metric data, improved both resident knowledge and comfort with QI approaches. Beyond these resident-centered outcomes, the intervention also translated to improved patient outcomes, with a significant increase in colorectal cancer screening rates post intervention.
Corresponding author: Kinjalika Sathi, MD, 800 Washington St., Boston, MA 02111; ksathi@tuftsmedicalcenter.org
Disclosures: None reported.
1. Accreditation Council for Graduate Medical Education. ACGME Common Program Requirements (Residency). Approved June 13, 2021. Updated July 1, 2022. Accessed December 29, 2022. https://www.acgme.org/globalassets/pfassets/programrequirements/cprresidency_2022v3.pdf
2. Koh NJ, Wagner R, Newton RC, et al; on behalf of the CLER Evaluation Committee and the CLER Program. CLER National Report of Findings 2021. Accreditation Council for Graduate Medical Education; 2021. Accessed December 29, 2022. https://www.acgme.org/globalassets/pdfs/cler/2021clernationalreportoffindings.pdf
3. Liao JM, Co JP, Kachalia A. Providing educational content and context for training the next generation of physicians in quality improvement. Acad Med. 2015;90(9):1241-1245. doi:10.1097/ACM.0000000000000799
4. Johnson KM, Fiordellisi W, Kuperman E, et al. X + Y = time for QI: meaningful engagement of residents in quality improvement during the ambulatory block. J Grad Med Educ. 2018;10(3):316-324. doi:10.4300/JGME-D-17-00761.1
5. Kesari K, Ali S, Smith S. Integrating residents with institutional quality improvement teams. Med Educ. 2017;51(11):1173. doi:10.1111/medu.13431
6. Ogrinc G, Cohen ES, van Aalst R, et al. Clinical and educational outcomes of an integrated inpatient quality improvement curriculum for internal medicine residents. J Grad Med Educ. 2016;8(4):563-568. doi:10.4300/JGME-D-15-00412.1
7. Malayala SV, Qazi KJ, Samdani AJ, et al. A multidisciplinary performance improvement rotation in an internal medicine training program. Int J Med Educ. 2016;7:212-213. doi:10.5116/ijme.5765.0bda
8. Duello K, Louh I, Greig H, et al. Residents’ knowledge of quality improvement: the impact of using a group project curriculum. Postgrad Med J. 2015;91(1078):431-435. doi:10.1136/postgradmedj-2014-132886
9. Fok MC, Wong RY. Impact of a competency based curriculum on quality improvement among internal medicine residents. BMC Med Educ. 2014;14:252. doi:10.1186/s12909-014-0252-7
10. Wilper AP, Smith CS, Weppner W. Instituting systems-based practice and practice-based learning and improvement: a curriculum of inquiry. Med Educ Online. 2013;18:21612. doi:10.3402/meo.v18i0.21612
11. Weigel C, Suen W, Gupte G. Using lean methodology to teach quality improvement to internal medicine residents at a safety net hospital. Am J Med Qual. 2013;28(5):392-399. doi:10.1177/1062860612474062
12. Tomolo AM, Lawrence RH, Watts B, et al. Pilot study evaluating a practice-based learning and improvement curriculum focusing on the development of system-level quality improvement skills. J Grad Med Educ. 2011;3(1):49-58. doi:10.4300/JGME-D-10-00104.1
13. Djuricich AM, Ciccarelli M, Swigonski NL. A continuous quality improvement curriculum for residents: addressing core competency, improving systems. Acad Med. 2004;79(10 Suppl):S65-S67. doi:10.1097/00001888-200410001-00020
14. Essien UR, He W, Ray A, et al. Disparities in quality of primary care by resident and staff physicians: is there a conflict between training and equity? J Gen Intern Med. 2019;34(7):1184-1191. doi:10.1007/s11606-019-04960-5
15. Amat M, Norian E, Graham KL. Unmasking a vulnerable patient care process: a qualitative study describing the current state of resident continuity clinic in a nationwide cohort of internal medicine residency programs. Am J Med. 2022;135(6):783-786. doi:10.1016/j.amjmed.2022.02.007
16. Wong BM, Etchells EE, Kuper A, et al. Teaching quality improvement and patient safety to trainees: a systematic review. Acad Med. 2010;85(9):1425-1439. doi:10.1097/ACM.0b013e3181e2d0c6
17. Armstrong G, Headrick L, Madigosky W, et al. Designing education to improve care. Jt Comm J Qual Patient Saf. 2012;38:5-14. doi:10.1016/s1553-7250(12)38002-1
18. Hwang AS, Harding AS, Chang Y, et al. An audit and feedback intervention to improve internal medicine residents’ performance on ambulatory quality measures: a randomized controlled trial. Popul Health Manag. 2019;22(6):529-535. doi:10.1089/pop.2018.0217
19. Institute for Healthcare Improvement. Open school. The paper airplane factory. Accessed December 29, 2022. https://www.ihi.org/education/IHIOpenSchool/resources/Pages/Activities/PaperAirplaneFactory.aspx
ABSTRACT
Objective: To teach internal medicine residents quality improvement (QI) principles in an effort to improve resident knowledge and comfort with QI, as well as address quality care gaps in resident clinic primary care patient panels.
Design: A QI curriculum was implemented for all residents rotating through a primary care block over a 6-month period. Residents completed Institute for Healthcare Improvement (IHI) modules, participated in a QI workshop, and received panel data reports, ultimately completing a plan-do-study-act (PDSA) cycle to improve colorectal cancer screening and hypertension control.
Setting and participants: This project was undertaken at Tufts Medical Center Primary Care, Boston, Massachusetts, the primary care teaching practice for all 75 internal medicine residents at Tufts Medical Center. All internal medicine residents were included, with 55 (73%) of the 75 residents completing the presurvey, and 39 (52%) completing the postsurvey.
Measurements: We administered a 10-question pre- and postsurvey looking at resident attitudes toward and comfort with QI and familiarity with their panel data as well as measured rates of colorectal cancer screening and hypertension control in resident panels.
Results: There was an increase in the numbers of residents who performed a PDSA cycle (P = .002), completed outreach based on their panel data (P = .02), and felt comfortable in both creating aim statements and designing and implementing PDSA cycles (P < .0001). The residents’ knowledge of their panel data significantly increased. There was no significant improvement in hypertension control, but there was an increase in colorectal cancer screening rates (P < .0001).
Conclusion: Providing panel data and performing targeted QI interventions can improve resident comfort with QI, translating to improvement in patient outcomes.
Keywords: quality improvement, resident education, medical education, care gaps, quality metrics.
As quality improvement (QI) has become an integral part of clinical practice, residency training programs have continued to evolve in how best to teach QI. The Accreditation Council for Graduate Medical Education (ACGME) Common Program requirements mandate that core competencies in residency programs include practice-based learning and improvement and systems-based practice.1 Residents should receive education in QI, receive data on quality metrics and benchmarks related to their patient population, and participate in QI activities. The Clinical Learning Environment Review (CLER) program was established to provide feedback to institutions on 6 focused areas, including patient safety and health care quality. In visits to institutions across the United States, the CLER committees found that many residents had limited knowledge of QI concepts and limited access to data on quality metrics and benchmarks.2
There are many barriers to implementing a QI curriculum in residency programs, and creating and maintaining successful strategies has proven challenging.3 Many QI curricula for internal medicine residents have been described in the literature, but the results of many of these studies focus on resident self-assessment of QI knowledge and numbers of projects rather than on patient outcomes.4-13 As there is some evidence suggesting that patients treated by residents have worse outcomes on ambulatory quality measures when compared with patients treated by staff physicians,14,15 it is important to also look at patient outcomes when evaluating a QI curriculum. Experts in education recommend the following to optimize learning: exposure to both didactic and experiential opportunities, connection to health system improvement efforts, and assessment of patient outcomes in addition to learner feedback.16,17 A study also found that providing panel data to residents could improve quality metrics.18
In this study, we sought to investigate the effects of a resident QI intervention during an ambulatory block on both residents’ self-assessments of QI knowledge and attitudes as well as on patient quality metrics.
Methods
Curriculum
We implemented this educational initiative at Tufts Medical Center Primary Care, Boston, Massachusetts, the primary care teaching practice for all 75 internal medicine residents at Tufts Medical Center. Co-located with the 415-bed academic medical center in downtown Boston, the practice serves more than 40,000 patients, approximately 7000 of whom are cared for by resident primary care physicians (PCPs). The internal medicine residents rotate through the primary care clinic as part of continuity clinic during ambulatory or elective blocks. In addition to continuity clinic, the residents have 2 dedicated 3-week primary care rotations during the course of an academic year. Primary care rotations consist of 5 clinic sessions a week as well as structured teaching sessions. Each resident inherits a panel of patients from an outgoing senior resident, with an average panel size of 96 patients per resident.
Prior to this study intervention, we did not do any formal QI teaching to our residents as part of their primary care curriculum, and previous panel management had focused more on chart reviews of patients whom residents perceived to be higher risk. Residents from all 3 years were included in the intervention. We taught a QI curriculum to our residents from January 2018 to June 2018 during the 3-week primary care rotation, which consisted of the following components:
- Institute for Healthcare Improvement (IHI) module QI 102 completed independently online.
- A 2-hour QI workshop led by 1 of 2 primary care faculty with backgrounds in QI, during which residents were taught basic principles of QI, including how to craft aim statements and design plan-do-study-act (PDSA) cycles, and participated in a hands-on QI activity designed to model rapid cycle improvement (the Paper Airplane Factory19).
- Distribution of individualized reports of residents’ patient panel data by email at the start of the primary care block that detailed patients’ overall rates of colorectal cancer screening and hypertension (HTN) control, along with the average resident panel rates and the average attending panel rates. The reports also included a list of all residents’ patients who were overdue for colorectal cancer screening or whose last blood pressure (BP) was uncontrolled (systolic BP ≥ 140 mm Hg or diastolic BP ≥ 90 mm Hg). These reports were originally designed by our practice’s QI team and run and exported in Microsoft Excel format monthly by our information technology (IT) administrator.
- Instruction on aim statements as a group, followed by the expectation that each resident create an individualized aim statement tailored to each resident’s patient panel rates, with the PDSA cycle to be implemented during the remainder of the primary care rotation, focusing on improvement of colorectal cancer screening and HTN control (see supplementary eFigure 1 online for the worksheet used for the workshop).
- Residents were held accountable for their interventions by various check-ins. At the end of the primary care block, residents were required to submit their completed worksheets showing the intervention they had undertaken and when it was performed. The 2 primary care attendings primarily responsible for QI education would review the resident’s work approximately 1 to 2 months after they submitted their worksheets describing their intervention. These attendings sent the residents personalized feedback based on whether the intervention had been completed or successful as evidenced by documentation in the chart, including direct patient outreach by phone, letter, or portal; outreach to the resident coordinator; scheduled follow-up appointment; or booking or completion of colorectal cancer screening. Along with this feedback, residents were also sent suggestions for next steps. Resident preceptors were copied on the email to facilitate reinforcement of the goals and plans. Finally, the resident preceptors also helped with accountability by going through the residents’ worksheets and patient panel metrics with the residents during biannual evaluations.
Evaluation
Residents were surveyed with a 10-item questionnaire pre and post intervention regarding their attitudes toward QI, understanding of QI principles, and familiarity with their patient panel data. Surveys were anonymous and distributed via the SurveyMonkey platform (see supplementary eFigure 2 online). Residents were asked if they had ever performed a PDSA cycle, performed patient outreach, or performed an intervention and whether they knew the rates of diabetes, HTN, and colorectal cancer screening in their patient panels. Questions rated on a 5-point Likert scale were used to assess comfort with panel management, developing an aim statement, designing and implementing a PDSA cycle, as well as interest in pursuing QI as a career. For the purposes of analysis, these questions were dichotomized into “somewhat comfortable” and “very comfortable” vs “neutral,” “somewhat uncomfortable,” and “very uncomfortable.” Similarly, we dichotomized the question about interest in QI as a career into “somewhat interested” and “very interested” vs “neutral,” “somewhat disinterested,” and “very disinterested.” As the surveys were anonymous, we were unable to pair the pre- and postintervention surveys and used a chi-square test to evaluate whether there was an association between survey assessments pre intervention vs post intervention and a positive or negative response to the question.
We also examined rates of HTN control and colorectal cancer screening in all 75 resident panels pre and post intervention. The paired t-test was used to determine whether the mean change from pre to post intervention was significant. SAS 9.4 (SAS Institute Inc.) was used for all analyses. Institutional Review Board exemption was obtained from the Tufts Medical Center IRB. There was no funding received for this study.
Results
Respondents
Of the 75 residents, 55 (73%) completed the survey prior to the intervention, and 39 (52%) completed the survey after the intervention.
Panel Knowledge and Intervention
Prior to the intervention, 45% of residents had performed a PDSA cycle, compared with 77% post intervention, which was a significant increase (P = .002) (Table 1). Sixty-two percent of residents had performed outreach or an intervention based on their patient panel reports prior to the intervention, compared with 85% of residents post intervention, which was also a significant increase (P = .02). The increase post intervention was not 100%, as there were residents who either missed the initial workshop or who did not follow through with their planned intervention. Common interventions included the residents giving their coordinators a list of patients to call to schedule appointments, utilizing fellow team members (eg, pharmacists, social workers) for targeted patient outreach, or calling patients themselves to reestablish a connection.
In terms of knowledge of their patient panels, prior to the intervention, 55%, 62%, and 62% of residents knew the rates of patients in their panel with diabetes, HTN, and colorectal cancer screening, respectively. After the intervention, the residents’ knowledge of these rates increased significantly, to 85% for diabetes (P = .002), 97% for HTN (P < .0001), and 97% for colorectal cancer screening (P < .0001).
Comfort With QI Approaches
Prior to the intervention, 82% of residents were comfortable managing their primary care panel, which did not change significantly post intervention (Table 2). The residents’ comfort with designing an aim statement did significantly increase, from 55% to 95% (P < .0001). The residents also had a significant increase in comfort with both designing and implementing a PDSA cycle. Prior to the intervention, 22% felt comfortable designing a PDSA cycle, which increased to 79% (P < .0001) post intervention, and 24% felt comfortable implementing a PDSA cycle, which increased to 77% (P < .0001) post intervention.
Patient Outcome Measures
The rate of HTN control in the residents' patient panels did not change significantly pre and post intervention (Table 3). The rate of resident patients who were up to date with colorectal cancer screening increased by 6.5% post intervention (P < .0001).
Interest in QI as a Career
As part of the survey, residents were asked how interested they were in making QI a part of their career. Fifty percent of residents indicated an interest in QI pre intervention, and 54% indicated an interest post intervention, which was not a significant difference (P = .72).
Discussion
In this study, we found that integration of a QI curriculum into a primary care rotation improved both residents’ knowledge of their patient panels and comfort with QI approaches, which translated to improvement in patient outcomes. Several previous studies have found improvements in resident self-assessment or knowledge after implementation of a QI curriculum.4-13 Liao et al implemented a longitudinal curriculum including both didactic and experiential components and found an improvement in both QI confidence and knowledge.3 Similarly, Duello et al8 found that a curriculum including both didactic lectures and QI projects improved subjective QI knowledge and comfort. Interestingly, Fok and Wong9 found that resident knowledge could be sustained post curriculum after completion of a QI project, suggesting that experiential learning may be helpful in maintaining knowledge.
Studies also have looked at providing performance data to residents. Hwang et al18 found that providing audit and feedback in the form of individual panel performance data to residents compared with practice targets led to statistically significant improvement in cancer screening rates and composite quality score, indicating that there is tremendous potential in providing residents with their data. While the ACGME mandates that residents should receive data on their quality metrics, on CLER visits, many residents interviewed noted limited access to data on their metrics and benchmarks.1,2
Though previous studies have individually looked at teaching QI concepts, providing panel data, or targeting select metrics, our study was unique in that it reviewed both self-reported resident outcomes data as well as actual patient outcomes. In addition to finding increased knowledge of patient panels and comfort with QI approaches, we found a significant increase in colorectal cancer screening rates post intervention. We thought this finding was particularly important given some data that residents' patients have been found to have worse outcomes on quality metrics compared with patients cared for by staff physicians.14,15 Given that having a resident physician as a PCP has been associated with failing to meet quality measures, it is especially important to focus targeted quality improvement initiatives in this patient population to reduce disparities in care.
We found that residents had improved knowledge on their patient panels as a result of this initiative. The residents were noted to have a higher knowledge of their HTN and colorectal cancer screening rates in comparison to their diabetes metrics. We suspect this is because residents are provided with multiple metrics related to diabetes, including process measures such as A1c testing, as well as outcome measures such as A1c control, so it may be harder for them to elucidate exactly how they are doing with their diabetes patients, whereas in HTN control and colorectal cancer screening, there is only 1 associated metric. Interestingly, even though HTN and colorectal cancer screening were the 2 measures focused on in the study, the residents had a significant improvement in knowledge of the rates of diabetes in their panel as well. This suggests that even just receiving data alone is valuable, hopefully translating to better outcomes with better baseline understanding of panels. We believe that our intervention was successful because it included both a didactic and an experiential component, as well as the use of individual panel performance data.
There were several limitations to our study. It was performed at a single institution, translating to a small sample size. Our data analysis was limited because we were unable to pair our pre- and postintervention survey responses because we used an anonymous survey. We also did not have full participation in postintervention surveys from all residents, which may have biased the study in favor of high performers. Another limitation was that our survey relied on self-reported outcomes for the questions about the residents knowing their patient panels.
This study required a 2-hour workshop every 3 weeks led by a faculty member trained in QI. Given the amount of time needed for the curriculum, this study may be difficult to replicate at other institutions, especially if faculty with an interest or training in QI are not available. Given our finding that residents had increased knowledge of their patient panels after receiving panel metrics, simply providing data with the goal of smaller, focused interventions may be easier to implement. At our institution, we discontinued the longer 2-hour QI workshops designed to teach QI approaches more broadly. We continue to provide individualized panel data to all residents during their primary care rotations and conduct half-hour, small group workshops with the interns that focus on drafting aim statements and planning interventions. All residents are required to submit worksheets to us at the end of their primary care blocks listing their current rates of each predetermined metric and laying out their aim statements and planned interventions. Residents also continue to receive feedback from our faculty with expertise in QI afterward on their plans and evidence of follow-through in the chart, with their preceptors included on the feedback emails. Even without the larger QI workshop, this approach has continued to be successful and appreciated. In fact, it does appear as though improvement in colorectal cancer screening has been sustained over several years. At the end of our study period, the resident patient colorectal cancer screening rate rose from 34% to 43%, and for the 2021-2022 academic year, the rate rose further, from 46% to 50%.
Given that the resident clinic patient population is at higher risk overall, targeted outreach and approaches to improve quality must be continued. Future areas of research include looking at which interventions, whether QI curriculum, provision of panel data, or required panel management interventions, translate to the greatest improvements in patient outcomes in this vulnerable population.
Conclusion
Our study showed that a dedicated QI curriculum for the residents and access to quality metric data improved both resident knowledge and comfort with QI approaches. Beyond resident-centered outcomes, there was also translation to improved patient outcomes, with a significant increase in colon cancer screening rates post intervention.
Corresponding author: Kinjalika Sathi, MD, 800 Washington St., Boston, MA 02111; ksathi@tuftsmedicalcenter.org
Disclosures: None reported.
ABSTRACT
Objective: To teach internal medicine residents quality improvement (QI) principles in an effort to improve resident knowledge and comfort with QI, as well as address quality care gaps in resident clinic primary care patient panels.
Design: A QI curriculum was implemented for all residents rotating through a primary care block over a 6-month period. Residents completed Institute for Healthcare Improvement (IHI) modules, participated in a QI workshop, and received panel data reports, ultimately completing a plan-do-study-act (PDSA) cycle to improve colorectal cancer screening and hypertension control.
Setting and participants: This project was undertaken at Tufts Medical Center Primary Care, Boston, Massachusetts, the primary care teaching practice for all 75 internal medicine residents at Tufts Medical Center. All internal medicine residents were included, with 55 (73%) of the 75 residents completing the presurvey, and 39 (52%) completing the postsurvey.
Measurements: We administered a 10-question pre- and postsurvey looking at resident attitudes toward and comfort with QI and familiarity with their panel data as well as measured rates of colorectal cancer screening and hypertension control in resident panels.
Results: There was an increase in the numbers of residents who performed a PDSA cycle (P = .002), completed outreach based on their panel data (P = .02), and felt comfortable in both creating aim statements and designing and implementing PDSA cycles (P < .0001). The residents’ knowledge of their panel data significantly increased. There was no significant improvement in hypertension control, but there was an increase in colorectal cancer screening rates (P < .0001).
Conclusion: Providing panel data and performing targeted QI interventions can improve resident comfort with QI, translating to improvement in patient outcomes.
Keywords: quality improvement, resident education, medical education, care gaps, quality metrics.
As quality improvement (QI) has become an integral part of clinical practice, residency training programs have continued to evolve in how best to teach QI. The Accreditation Council for Graduate Medical Education (ACGME) Common Program requirements mandate that core competencies in residency programs include practice-based learning and improvement and systems-based practice.1 Residents should receive education in QI, receive data on quality metrics and benchmarks related to their patient population, and participate in QI activities. The Clinical Learning Environment Review (CLER) program was established to provide feedback to institutions on 6 focused areas, including patient safety and health care quality. In visits to institutions across the United States, the CLER committees found that many residents had limited knowledge of QI concepts and limited access to data on quality metrics and benchmarks.2
There are many barriers to implementing a QI curriculum in residency programs, and creating and maintaining successful strategies has proven challenging.3 Many QI curricula for internal medicine residents have been described in the literature, but the results of many of these studies focus on resident self-assessment of QI knowledge and numbers of projects rather than on patient outcomes.4-13 As there is some evidence suggesting that patients treated by residents have worse outcomes on ambulatory quality measures when compared with patients treated by staff physicians,14,15 it is important to also look at patient outcomes when evaluating a QI curriculum. Experts in education recommend the following to optimize learning: exposure to both didactic and experiential opportunities, connection to health system improvement efforts, and assessment of patient outcomes in addition to learner feedback.16,17 A study also found that providing panel data to residents could improve quality metrics.18
In this study, we sought to investigate the effects of a resident QI intervention during an ambulatory block on both residents’ self-assessments of QI knowledge and attitudes as well as on patient quality metrics.
Methods
Curriculum
We implemented this educational initiative at Tufts Medical Center Primary Care, Boston, Massachusetts, the primary care teaching practice for all 75 internal medicine residents at Tufts Medical Center. Co-located with the 415-bed academic medical center in downtown Boston, the practice serves more than 40,000 patients, approximately 7000 of whom are cared for by resident primary care physicians (PCPs). The internal medicine residents rotate through the primary care clinic as part of continuity clinic during ambulatory or elective blocks. In addition to continuity clinic, the residents have 2 dedicated 3-week primary care rotations during the course of an academic year. Primary care rotations consist of 5 clinic sessions a week as well as structured teaching sessions. Each resident inherits a panel of patients from an outgoing senior resident, with an average panel size of 96 patients per resident.
Prior to this study intervention, we did not do any formal QI teaching to our residents as part of their primary care curriculum, and previous panel management had focused more on chart reviews of patients whom residents perceived to be higher risk. Residents from all 3 years were included in the intervention. We taught a QI curriculum to our residents from January 2018 to June 2018 during the 3-week primary care rotation, which consisted of the following components:
- Institute for Healthcare Improvement (IHI) module QI 102 completed independently online.
- A 2-hour QI workshop led by 1 of 2 primary care faculty with backgrounds in QI, during which residents were taught basic principles of QI, including how to craft aim statements and design plan-do-study-act (PDSA) cycles, and participated in a hands-on QI activity designed to model rapid cycle improvement (the Paper Airplane Factory19).
- Distribution of individualized reports of residents’ patient panel data by email at the start of the primary care block that detailed patients’ overall rates of colorectal cancer screening and hypertension (HTN) control, along with the average resident panel rates and the average attending panel rates. The reports also included a list of all residents’ patients who were overdue for colorectal cancer screening or whose last blood pressure (BP) was uncontrolled (systolic BP ≥ 140 mm Hg or diastolic BP ≥ 90 mm Hg). These reports were originally designed by our practice’s QI team and run and exported in Microsoft Excel format monthly by our information technology (IT) administrator.
- Instruction on aim statements as a group, followed by the expectation that each resident create an individualized aim statement tailored to each resident’s patient panel rates, with the PDSA cycle to be implemented during the remainder of the primary care rotation, focusing on improvement of colorectal cancer screening and HTN control (see supplementary eFigure 1 online for the worksheet used for the workshop).
- Residents were held accountable for their interventions by various check-ins. At the end of the primary care block, residents were required to submit their completed worksheets showing the intervention they had undertaken and when it was performed. The 2 primary care attendings primarily responsible for QI education would review the resident’s work approximately 1 to 2 months after they submitted their worksheets describing their intervention. These attendings sent the residents personalized feedback based on whether the intervention had been completed or successful as evidenced by documentation in the chart, including direct patient outreach by phone, letter, or portal; outreach to the resident coordinator; scheduled follow-up appointment; or booking or completion of colorectal cancer screening. Along with this feedback, residents were also sent suggestions for next steps. Resident preceptors were copied on the email to facilitate reinforcement of the goals and plans. Finally, the resident preceptors also helped with accountability by going through the residents’ worksheets and patient panel metrics with the residents during biannual evaluations.
Evaluation
Residents were surveyed with a 10-item questionnaire pre and post intervention regarding their attitudes toward QI, understanding of QI principles, and familiarity with their patient panel data. Surveys were anonymous and distributed via the SurveyMonkey platform (see supplementary eFigure 2 online). Residents were asked if they had ever performed a PDSA cycle, performed patient outreach, or performed an intervention and whether they knew the rates of diabetes, HTN, and colorectal cancer screening in their patient panels. Questions rated on a 5-point Likert scale were used to assess comfort with panel management, developing an aim statement, designing and implementing a PDSA cycle, as well as interest in pursuing QI as a career. For the purposes of analysis, these questions were dichotomized into “somewhat comfortable” and “very comfortable” vs “neutral,” “somewhat uncomfortable,” and “very uncomfortable.” Similarly, we dichotomized the question about interest in QI as a career into “somewhat interested” and “very interested” vs “neutral,” “somewhat disinterested,” and “very disinterested.” As the surveys were anonymous, we were unable to pair the pre- and postintervention surveys and used a chi-square test to evaluate whether there was an association between survey assessments pre intervention vs post intervention and a positive or negative response to the question.
We also examined rates of HTN control and colorectal cancer screening in all 75 resident panels pre and post intervention. The paired t-test was used to determine whether the mean change from pre to post intervention was significant. SAS 9.4 (SAS Institute Inc.) was used for all analyses. Institutional Review Board exemption was obtained from the Tufts Medical Center IRB. There was no funding received for this study.
Results
Respondents
Of the 75 residents, 55 (73%) completed the survey prior to the intervention, and 39 (52%) completed the survey after the intervention.
Panel Knowledge and Intervention
Prior to the intervention, 45% of residents had performed a PDSA cycle, compared with 77% post intervention, which was a significant increase (P = .002) (Table 1). Sixty-two percent of residents had performed outreach or an intervention based on their patient panel reports prior to the intervention, compared with 85% of residents post intervention, which was also a significant increase (P = .02). The increase post intervention was not 100%, as there were residents who either missed the initial workshop or who did not follow through with their planned intervention. Common interventions included the residents giving their coordinators a list of patients to call to schedule appointments, utilizing fellow team members (eg, pharmacists, social workers) for targeted patient outreach, or calling patients themselves to reestablish a connection.
In terms of knowledge of their patient panels, prior to the intervention, 55%, 62%, and 62% of residents knew the rates of patients in their panel with diabetes, HTN, and colorectal cancer screening, respectively. After the intervention, the residents’ knowledge of these rates increased significantly, to 85% for diabetes (P = .002), 97% for HTN (P < .0001), and 97% for colorectal cancer screening (P < .0001).
Comfort With QI Approaches
Prior to the intervention, 82% of residents were comfortable managing their primary care panel, which did not change significantly post intervention (Table 2). The residents’ comfort with designing an aim statement did significantly increase, from 55% to 95% (P < .0001). The residents also had a significant increase in comfort with both designing and implementing a PDSA cycle. Prior to the intervention, 22% felt comfortable designing a PDSA cycle, which increased to 79% (P < .0001) post intervention, and 24% felt comfortable implementing a PDSA cycle, which increased to 77% (P < .0001) post intervention.
Patient Outcome Measures
The rate of HTN control in the residents' patient panels did not change significantly pre and post intervention (Table 3). The rate of resident patients who were up to date with colorectal cancer screening increased by 6.5% post intervention (P < .0001).
Interest in QI as a Career
As part of the survey, residents were asked how interested they were in making QI a part of their career. Fifty percent of residents indicated an interest in QI pre intervention, and 54% indicated an interest post intervention, which was not a significant difference (P = .72).
Discussion
In this study, we found that integration of a QI curriculum into a primary care rotation improved both residents’ knowledge of their patient panels and comfort with QI approaches, which translated to improvement in patient outcomes. Several previous studies have found improvements in resident self-assessment or knowledge after implementation of a QI curriculum.4-13 Liao et al implemented a longitudinal curriculum including both didactic and experiential components and found an improvement in both QI confidence and knowledge.3 Similarly, Duello et al8 found that a curriculum including both didactic lectures and QI projects improved subjective QI knowledge and comfort. Interestingly, Fok and Wong9 found that resident knowledge could be sustained post curriculum after completion of a QI project, suggesting that experiential learning may be helpful in maintaining knowledge.
Studies also have looked at providing performance data to residents. Hwang et al18 found that providing audit and feedback in the form of individual panel performance data to residents compared with practice targets led to statistically significant improvement in cancer screening rates and composite quality score, indicating that there is tremendous potential in providing residents with their data. While the ACGME mandates that residents should receive data on their quality metrics, on CLER visits, many residents interviewed noted limited access to data on their metrics and benchmarks.1,2
Though previous studies have individually looked at teaching QI concepts, providing panel data, or targeting select metrics, our study was unique in that it reviewed both self-reported resident outcomes and actual patient outcomes. In addition to finding increased knowledge of patient panels and comfort with QI approaches, we found a significant increase in colorectal cancer screening rates post intervention. We thought this finding was particularly important given evidence that residents' patients have worse outcomes on quality metrics than patients cared for by staff physicians.14,15 Given that having a resident physician as a PCP has been associated with failing to meet quality measures, it is especially important to focus targeted quality improvement initiatives on this patient population to reduce disparities in care.
We found that residents had improved knowledge of their patient panels as a result of this initiative. The residents had greater knowledge of their HTN and colorectal cancer screening rates than of their diabetes metrics. We suspect this is because residents are provided with multiple metrics related to diabetes, including process measures such as A1c testing and outcome measures such as A1c control, which may make it harder for them to gauge exactly how they are doing with their patients with diabetes; for HTN control and colorectal cancer screening, there is only 1 associated metric. Interestingly, even though HTN and colorectal cancer screening were the 2 measures focused on in the study, the residents also had a significant improvement in knowledge of the rates of diabetes in their panels. This suggests that receiving data alone is valuable and that a better baseline understanding of one's panel may itself translate to better outcomes. We believe our intervention was successful because it included both a didactic and an experiential component, as well as the use of individual panel performance data.
There were several limitations to our study. It was performed at a single institution, which limited the sample size. Because the survey was anonymous, we were unable to pair pre- and postintervention responses, which limited our data analysis. We also did not have full participation in the postintervention survey, which may have biased the results in favor of high performers. Finally, the survey relied on residents' self-report for the questions about knowledge of their patient panels.
This study required a 2-hour workshop every 3 weeks led by a faculty member trained in QI. Given the amount of time needed for the curriculum, this study may be difficult to replicate at other institutions, especially if faculty with an interest or training in QI are not available. Given our finding that residents had increased knowledge of their patient panels after receiving panel metrics, simply providing data with the goal of smaller, focused interventions may be easier to implement. At our institution, we discontinued the longer 2-hour QI workshops designed to teach QI approaches more broadly. We continue to provide individualized panel data to all residents during their primary care rotations and conduct half-hour, small-group workshops with the interns that focus on drafting aim statements and planning interventions. All residents are required to submit worksheets at the end of their primary care blocks listing their current rates for each predetermined metric and laying out their aim statements and planned interventions. Residents also continue to receive feedback afterward from our faculty with QI expertise on their plans and on evidence of follow-through in the chart, with their preceptors copied on the feedback emails. Even without the larger QI workshop, this approach has continued to be successful and well received. Improvement in colorectal cancer screening also appears to have been sustained over several years: over our study period, the resident patients' colorectal cancer screening rate rose from 34% to 43%, and during the 2021-2022 academic year the rate rose further, from 46% to 50%.
Given that the resident clinic patient population is at higher risk overall, targeted outreach and approaches to improve quality must be continued. Future areas of research include determining which interventions (QI curricula, provision of panel data, or required panel management interventions) translate to the greatest improvements in patient outcomes in this vulnerable population.
Conclusion
Our study showed that a dedicated QI curriculum for residents and access to quality metric data improved both resident knowledge and comfort with QI approaches. Beyond resident-centered outcomes, these gains also translated to improved patient outcomes, with a significant increase in colorectal cancer screening rates post intervention.
Corresponding author: Kinjalika Sathi, MD, 800 Washington St., Boston, MA 02111; ksathi@tuftsmedicalcenter.org
Disclosures: None reported.
1. Accreditation Council for Graduate Medical Education. ACGME Common Program Requirements (Residency). Approved June 13, 2021. Updated July 1, 2022. Accessed December 29, 2022. https://www.acgme.org/globalassets/pfassets/programrequirements/cprresidency_2022v3.pdf
2. Koh NJ, Wagner R, Newton RC, et al; on behalf of the CLER Evaluation Committee and the CLER Program. CLER National Report of Findings 2021. Accreditation Council for Graduate Medical Education; 2021. Accessed December 29, 2022. https://www.acgme.org/globalassets/pdfs/cler/2021clernationalreportoffindings.pdf
3. Liao JM, Co JP, Kachalia A. Providing educational content and context for training the next generation of physicians in quality improvement. Acad Med. 2015;90(9):1241-1245. doi:10.1097/ACM.0000000000000799
4. Johnson KM, Fiordellisi W, Kuperman E, et al. X + Y = time for QI: meaningful engagement of residents in quality improvement during the ambulatory block. J Grad Med Educ. 2018;10(3):316-324. doi:10.4300/JGME-D-17-00761.1
5. Kesari K, Ali S, Smith S. Integrating residents with institutional quality improvement teams. Med Educ. 2017;51(11):1173. doi:10.1111/medu.13431
6. Ogrinc G, Cohen ES, van Aalst R, et al. Clinical and educational outcomes of an integrated inpatient quality improvement curriculum for internal medicine residents. J Grad Med Educ. 2016;8(4):563-568. doi:10.4300/JGME-D-15-00412.1
7. Malayala SV, Qazi KJ, Samdani AJ, et al. A multidisciplinary performance improvement rotation in an internal medicine training program. Int J Med Educ. 2016;7:212-213. doi:10.5116/ijme.5765.0bda
8. Duello K, Louh I, Greig H, et al. Residents’ knowledge of quality improvement: the impact of using a group project curriculum. Postgrad Med J. 2015;91(1078):431-435. doi:10.1136/postgradmedj-2014-132886
9. Fok MC, Wong RY. Impact of a competency based curriculum on quality improvement among internal medicine residents. BMC Med Educ. 2014;14:252. doi:10.1186/s12909-014-0252-7
10. Wilper AP, Smith CS, Weppner W. Instituting systems-based practice and practice-based learning and improvement: a curriculum of inquiry. Med Educ Online. 2013;18:21612. doi:10.3402/meo.v18i0.21612
11. Weigel C, Suen W, Gupte G. Using lean methodology to teach quality improvement to internal medicine residents at a safety net hospital. Am J Med Qual. 2013;28(5):392-399. doi:10.1177/1062860612474062
12. Tomolo AM, Lawrence RH, Watts B, et al. Pilot study evaluating a practice-based learning and improvement curriculum focusing on the development of system-level quality improvement skills. J Grad Med Educ. 2011;3(1):49-58. doi:10.4300/JGME-D-10-00104.1
13. Djuricich AM, Ciccarelli M, Swigonski NL. A continuous quality improvement curriculum for residents: addressing core competency, improving systems. Acad Med. 2004;79(10 Suppl):S65-S67. doi:10.1097/00001888-200410001-00020
14. Essien UR, He W, Ray A, et al. Disparities in quality of primary care by resident and staff physicians: is there a conflict between training and equity? J Gen Intern Med. 2019;34(7):1184-1191. doi:10.1007/s11606-019-04960-5
15. Amat M, Norian E, Graham KL. Unmasking a vulnerable patient care process: a qualitative study describing the current state of resident continuity clinic in a nationwide cohort of internal medicine residency programs. Am J Med. 2022;135(6):783-786. doi:10.1016/j.amjmed.2022.02.007
16. Wong BM, Etchells EE, Kuper A, et al. Teaching quality improvement and patient safety to trainees: a systematic review. Acad Med. 2010;85(9):1425-1439. doi:10.1097/ACM.0b013e3181e2d0c6
17. Armstrong G, Headrick L, Madigosky W, et al. Designing education to improve care. Jt Comm J Qual Patient Saf. 2012;38:5-14. doi:10.1016/s1553-7250(12)38002-1
18. Hwang AS, Harding AS, Chang Y, et al. An audit and feedback intervention to improve internal medicine residents’ performance on ambulatory quality measures: a randomized controlled trial. Popul Health Manag. 2019;22(6):529-535. doi:10.1089/pop.2018.0217
19. Institute for Healthcare Improvement. Open school. The paper airplane factory. Accessed December 29, 2022. https://www.ihi.org/education/IHIOpenSchool/resources/Pages/Activities/PaperAirplaneFactory.aspx
Diagnostic Errors in Hospitalized Patients
Abstract
Diagnostic errors in hospitalized patients are a leading cause of preventable morbidity and mortality. Significant challenges in defining and measuring diagnostic errors and underlying process failure points have led to considerable variability in reported rates of diagnostic errors and adverse outcomes. In this article, we explore the diagnostic process and its discrete components, emphasizing the centrality of the patient in decision-making as well as the continuous nature of the process. We review the incidence of diagnostic errors in hospitalized patients and different methodological approaches that have been used to arrive at these estimates. We discuss different but interdependent provider- and system-related process-failure points that lead to diagnostic errors. We examine specific challenges related to measurement of diagnostic errors and describe traditional and novel approaches that are being used to obtain the most precise estimates. Finally, we examine various patient-, provider-, and organizational-level interventions that have been proposed to improve diagnostic safety in hospitalized patients.
Keywords: diagnostic error, hospital medicine, patient safety.
Diagnosis is defined as a “pre-existing set of categories agreed upon by the medical profession to designate a specific condition.”1 The diagnostic process involves obtaining a clinical history, performing a physical examination, conducting diagnostic testing, and consulting with other clinical providers to gather data that are relevant to understanding the underlying disease processes. This exercise involves generating hypotheses and updating prior probabilities as more information and evidence become available. Throughout this process of information gathering, integration, and interpretation, there is an ongoing assessment of whether sufficient and necessary knowledge has been obtained to make an accurate diagnosis and provide appropriate treatment.2
Diagnostic error is defined as a missed opportunity to make a timely diagnosis as part of this iterative process, including the failure of communicating the diagnosis to the patient in a timely manner.3 It can be categorized as a missed, delayed, or incorrect diagnosis based on available evidence at the time. Establishing the correct diagnosis has important implications. A timely and precise diagnosis ensures the patient the highest probability of having a positive health outcome that reflects an appropriate understanding of underlying disease processes and is consistent with their overall goals of care.3 When diagnostic errors occur, they can cause patient harm. Adverse events due to medical errors, including diagnostic errors, are estimated to be the third leading cause of death in the United States.4 Most people will experience at least 1 diagnostic error in their lifetime. In the 2015 National Academy of Medicine report Improving Diagnosis in Healthcare, diagnostic errors were identified as a major hazard as well as an opportunity to improve patient outcomes.2
Diagnostic errors during hospitalizations are especially concerning, as they are more likely to be implicated in a wider spectrum of harm, including permanent disability and death. This has become even more relevant for hospital medicine physicians and other clinical providers as they encounter increasing cognitive and administrative workloads, rising dissatisfaction and burnout, and unique obstacles such as night-time scheduling.5
Incidence of Diagnostic Errors in Hospitalized Patients
Several methodological approaches have been used to estimate the incidence of diagnostic errors in hospitalized patients. These include retrospective reviews of a sample of all hospital admissions, evaluations of selected adverse outcomes including autopsy studies, patient and provider surveys, and malpractice claims. Laboratory testing audits and secondary reviews in other diagnostic subspecialties (eg, radiology, pathology, and microbiology) are also essential to improving diagnostic performance in these specialized fields, which in turn affects overall hospital diagnostic error rates.6-8 These diverse approaches provide unique insights into the degree to which potential harms, ranging from temporary impairment to permanent disability to death, are attributable to different failure points in the diagnostic process.
Large retrospective chart reviews of random hospital admissions remain the most accurate way to determine the overall incidence of diagnostic errors in hospitalized patients.9 The Harvard Medical Practice Study, published in 1991, laid the groundwork for measuring the incidence of adverse events in hospitalized patients and assessing their relation to medical error, negligence, and disability. Reviewing 30,121 randomly selected records from 51 randomly selected acute care hospitals in New York State, the study found that adverse events occurred in 3.7% of hospitalizations, diagnostic errors accounted for 13.8% of these events, and these errors were likely attributable to negligence in 74.7% of cases. The study not only outlined individual-level process failures, but also focused attention on some of the systemic causes, setting the agenda for quality improvement research in hospital-based care for years to come.10-12 A recent systematic review and meta-analysis of 22 hospital admission studies found a pooled rate of 0.7% (95% CI, 0.5%-1.1%) for harmful diagnostic errors.9 It found significant variations in the rates of adverse events, diagnostic errors, and range of diagnoses that were missed. This was primarily because of variability in the pre-test probability of detecting diagnostic errors in these specific cohorts, as well as heterogeneity in study definitions and methodologies, especially regarding how they defined and measured “diagnostic error.” The analysis, however, did not account for diagnostic errors that were not related to patient harm (missed opportunities); therefore, it likely significantly underestimated the true incidence of diagnostic errors in these study populations. Table 1 summarizes some of the key studies that have examined the incidence of harmful diagnostic errors in hospitalized patients.9-21
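To illustrate how a pooled rate and confidence interval of this kind can be derived, the sketch below applies a simplified random-effects (DerSimonian-Laird) pooling of logit-transformed study proportions. The study counts are invented for illustration; the cited meta-analysis9 used its own data and methodology.

```python
# Simplified random-effects (DerSimonian-Laird) pooling of study proportions.
# The (events, admissions) counts below are invented; the cited meta-analysis
# used its own data and methods.
import math

studies = [(12, 1500), (8, 2000), (30, 4200), (5, 900)]  # (harmful errors, admissions)

logits, variances = [], []
for events, n in studies:
    p = (events + 0.5) / (n + 1.0)                      # continuity-corrected proportion
    logits.append(math.log(p / (1 - p)))
    variances.append(1 / (events + 0.5) + 1 / (n - events + 0.5))

w = [1 / v for v in variances]                          # fixed-effect weights
y_fixed = sum(wi * yi for wi, yi in zip(w, logits)) / sum(w)
q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, logits))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)           # between-study variance

w_re = [1 / (v + tau2) for v in variances]              # random-effects weights
pooled = sum(wi * yi for wi, yi in zip(w_re, logits)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
inv_logit = lambda x: 1 / (1 + math.exp(-x))

print(f"Pooled harmful diagnostic error rate: {inv_logit(pooled):.2%} "
      f"(95% CI {inv_logit(pooled - 1.96 * se):.2%} to {inv_logit(pooled + 1.96 * se):.2%})")
```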
The chief limitation of reviewing random hospital admissions is that, since overall rates of diagnostic errors are still relatively low, a large number of case reviews are required to identify a sufficient sample of adverse outcomes to gain a meaningful understanding of the underlying process failure points and develop tools for remediation. Patient and provider surveys or data from malpractice claims can be high-yield starting points for research on process errors.22,23 Reviews of enriched cohorts of adverse outcomes, such as rapid-response events, intensive care unit (ICU) transfers, deaths, and hospital readmissions, can be an efficient way to identify the process failures that lead to the greatest harm. Depending on the research approach and the underlying patient populations sampled, rates of diagnostic errors in these high-risk groups have been estimated at approximately 5% to 20%, or even higher.6,24-31 For example, a retrospective study of 391 cases of unplanned 7-day readmissions found that 5.6% of cases contained at least 1 diagnostic error during the index admission.32 In a study conducted at 6 Belgian acute-care hospitals, 56% of patients requiring an unplanned transfer to a higher level of care were determined to have had an adverse event, and 12.4% of these adverse events were associated with errors in diagnosis.29 A systematic review of 16 hospital-based studies estimated that 3.1% of all inpatient deaths were likely preventable, which corresponded to 22,165 deaths annually in the United States.30 Another such review of 31 autopsy studies reported that 28% of autopsied ICU patients had at least 1 misdiagnosis; of these diagnostic errors, 8% were classified as potentially lethal, and 15% were considered major but not lethal.31 Significant drawbacks of such enriched cohort studies, however, are their poor generalizability and inability to detect failure points that do not lead to patient harm (near-miss events).33
Causes of Diagnostic Errors in Hospitalized Patients
All aspects of the diagnostic process are susceptible to errors. These errors stem from a variety of faulty processes, including failure of the patient to engage with the health care system (eg, due to lack of insurance or transportation, or delay in seeking care); failure in information gathering (eg, missed history or exam findings, ordering wrong tests, laboratory errors); failure in information interpretation (eg, exam finding or test result misinterpretation); inaccurate hypothesis generation (eg, due to suboptimal prioritization or weighing of supporting evidence); and failure in communication (eg, with other team members or with the patient).2,34 Reasons for diagnostic process failures vary widely across different health care settings. While clinician assessment errors (eg, failure to consider or alternatively overweigh competing diagnoses) and errors in testing and the monitoring phase (eg, failure to order or follow up diagnostic tests) can lead to a majority of diagnostic errors in some patient populations, in other settings, social (eg, poor health literacy, punitive cultural practices) and economic factors (eg, lack of access to appropriate diagnostic tests or to specialty expertise) play a more prominent role.34,35
The Figure describes the relationship between components of the diagnostic process and subsequent outcomes, including diagnostic process failures, diagnostic errors, and absence or presence of patient harm.2,36,37 It reemphasizes the centrality of the patient in decision-making and the continuous nature of the process. The Figure also illustrates that only a minority of process failures result in diagnostic errors, and a smaller proportion of diagnostic errors actually lead to patient harm. Conversely, it also shows that diagnostic errors can happen without any obvious process-failure points, and, similarly, patient harm can take place in the absence of any evident diagnostic errors.36-38 Finally, it highlights the need to incorporate feedback from process failures, diagnostic errors, and favorable and unfavorable patient outcomes in order to inform future quality improvement efforts and research.
A significant proportion of diagnostic errors are due to system-related vulnerabilities, such as limitations in the availability, adoption, or quality of workforce training, health informatics resources, and diagnostic capabilities. The lack of an institutional culture that promotes safety and transparency also predisposes to diagnostic errors.39,40 The other major domain of process failures relates to cognitive errors in clinician decision-making. Anchoring, confirmation bias, availability bias, and base-rate neglect are some of the common cognitive biases that, along with personality traits (aversion to risk or ambiguity, overconfidence) and affective biases (influence of emotion on decision-making), often determine the degree of utilization of resources and the possibility of suboptimal diagnostic performance.41,42 Further, implicit biases related to age, race, gender, and sexual orientation contribute to disparities in access to health care and outcomes.43 In a large number of cases of preventable adverse outcomes, however, multiple interdependent individual and system-related failure points lead to diagnostic error and patient harm.6,32
Challenges in Defining and Measuring Diagnostic Errors
In order to develop effective, evidence-based interventions to reduce diagnostic errors in hospitalized patients, it is essential to be able to first operationally define, and then accurately measure, diagnostic errors and the process failures that contribute to these errors in a standardized way that is reproducible across different settings.6,44 There are a number of obstacles in this endeavor.
A fundamental problem is that establishing a diagnosis is not a single act but a process. Patterns of symptoms and clinical presentations often differ for the same disease. Information required to make a diagnosis is usually gathered in stages, with the clinician obtaining additional data while considering many possibilities, of which 1 may ultimately be correct. Diagnoses evolve over time and in different care settings. “The most likely diagnosis” is not always the same as “the final correct diagnosis.” Moreover, the diagnostic process is influenced by patients’ individual clinical courses and preferences over time. This makes determination of missed, delayed, or incorrect diagnoses challenging.45,46
For hospitalized patients, the goal is generally to first rule out more serious and acute conditions (eg, pulmonary embolism or stroke), even if their probability is rather low. Conversely, a diagnosis that appears less consequential if delayed (eg, chronic anemia of unclear etiology) might not be pursued on an urgent basis and is often left to outpatient providers to examine, but it may still manifest in downstream harm (eg, delayed diagnosis of gastrointestinal malignancy or recurrent admissions for heart failure due to missed iron-deficiency anemia). Therefore, assigning disease likelihoods in hindsight can be highly subjective and not always accurate. This can be particularly difficult when clinician and other team deliberations are not recorded in their entirety.47
Another hurdle in the practice of diagnostic medicine is preserving the balance between underdiagnosis and overly aggressive diagnostic workups. Conducting laboratory, imaging, or other diagnostic studies without a clear, shared understanding of how they would affect clinical decision-making (eg, use of prostate-specific antigen to detect prostate cancer) not only increases costs but can also delay appropriate care. Worse, subsequent unnecessary diagnostic tests and treatments can sometimes lead to serious harm.48,49
Finally, retrospective reviews by clinicians are subject to multiple potential limitations, including poorly defined research questions, poorly developed inclusion and exclusion criteria, and issues related to inter- and intra-rater reliability.50 These methodological deficiencies can occur despite following “best practice” guidelines during the study planning, execution, and analysis phases. They further add to the challenge of defining and measuring diagnostic errors.47
Strategies to Improve Measurement of Diagnostic Errors
Development of new methodologies to reliably measure diagnostic errors is an area of active research. The advancement of uniform and universally agreed-upon frameworks to define and identify process failure points and diagnostic errors would help reduce measurement error and support development and testing of interventions that could be generalizable across different health care settings. To more accurately define and measure diagnostic errors, several novel approaches have been proposed (Table 2).
The Safer Dx framework is a comprehensive tool developed to advance the discipline of measuring diagnostic errors. For an episode of care under review, the instrument scores various items to determine the likelihood of a diagnostic error. These items evaluate multiple dimensions affecting diagnostic performance and measurement across 3 broad domains: structure (provider and organizational characteristics, from everyone involved with patient care to computing infrastructure to policies and regulations), process (elements of the patient-provider encounter, diagnostic test performance and follow-up, and subspecialty- and referral-specific factors), and outcome (establishing an accurate and timely diagnosis as opposed to a missed, delayed, or incorrect diagnosis). This instrument has been revised and can be further modified by a variety of stakeholders, including clinicians, health care organizations, and policymakers, to identify potential diagnostic errors in a standardized way for patient safety and quality improvement research.51,52
Use of standardized tools, such as the Diagnosis Error Evaluation and Research (DEER) taxonomy, can help to identify and classify specific failure points across different diagnostic process dimensions.37 These failure points can be classified into: issues related to patient presentation or access to health care; failure to obtain or misinterpretation of history or physical exam findings; errors in use of diagnostics tests due to technical or clinician-related factors; failures in appropriate weighing of evidence and hypothesis generation; errors associated with referral or consultation process; and failure to monitor the patient or obtain timely follow-up.34 The DEER taxonomy can also be modified based on specific research questions and study populations. Further, it can be recategorized to correspond to Safer Dx framework diagnostic process dimensions to provide insights into reasons for specific process failures and to develop new interventions to mitigate errors and patient harm.6
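As a purely illustrative sketch of how DEER-style failure points might be recorded and tallied during a chart review project, the fragment below uses category labels paraphrased from the dimensions above; the case data and field names are hypothetical and are not part of the published taxonomy.

```python
# Hypothetical tally of DEER-style failure points across reviewed cases.
# Category labels paraphrase the dimensions described above; case data are invented.
from collections import Counter

DEER_CATEGORIES = [
    "access/presentation",
    "history/physical exam",
    "diagnostic testing",
    "assessment (hypothesis generation/weighing)",
    "referral/consultation",
    "follow-up/monitoring",
]

# Each reviewed case may have more than one failure point.
reviewed_cases = [
    {"case_id": 1, "failure_points": ["diagnostic testing", "follow-up/monitoring"]},
    {"case_id": 2, "failure_points": ["assessment (hypothesis generation/weighing)"]},
    {"case_id": 3, "failure_points": []},  # no process failure identified
]

counts = Counter(fp for case in reviewed_cases for fp in case["failure_points"])
for category in DEER_CATEGORIES:
    print(f"{category}: {counts.get(category, 0)}")
```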
Since a majority of diagnostic errors do not lead to actual harm, use of “triggers” or clues (eg, procedure-related complications, patient falls, transfers to a higher level of care, readmissions within 30 days) can be a more efficient method to identify diagnostic errors and adverse events that do cause harm. The Global Trigger Tool, developed by the Institute for Healthcare Improvement, uses this strategy. This tool has been shown to identify a significantly higher number of serious adverse events than comparable methods.53 This facilitates selection and development of strategies at the institutional level that are most likely to improve patient outcomes.24
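A trigger-based screen can be prototyped very simply. The sketch below flags admissions that meet any of a few example triggers for subsequent structured review; the column names and triggers are hypothetical and are not the Global Trigger Tool's actual criteria.

```python
# Hypothetical trigger-based screen: flag admissions for structured diagnostic review.
# Column names and triggers are illustrative, not the Global Trigger Tool's own criteria.
import pandas as pd

admissions = pd.DataFrame([
    {"encounter_id": 1, "icu_transfer_within_72h": True,  "readmit_within_30d": False, "rapid_response": False},
    {"encounter_id": 2, "icu_transfer_within_72h": False, "readmit_within_30d": True,  "rapid_response": False},
    {"encounter_id": 3, "icu_transfer_within_72h": False, "readmit_within_30d": False, "rapid_response": False},
])

triggers = ["icu_transfer_within_72h", "readmit_within_30d", "rapid_response"]
admissions["triggered"] = admissions[triggers].any(axis=1)

for_review = admissions.loc[admissions["triggered"], "encounter_id"].tolist()
print("Encounters selected for structured chart review:", for_review)
```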
Encouraging and facilitating voluntary or prompted reporting from patients and clinicians can also play an important role in capturing diagnostic errors. Patients and clinicians are not only the key stakeholders but are also uniquely placed within the diagnostic process to detect and report potential errors.25,54 Patient-safety-event reporting systems, such as RL6, play a vital role in reporting near-misses and adverse events. These systems provide a mechanism for team members at all levels within the hospital to contribute toward reporting patient adverse events, including those arising from diagnostic errors.55 The Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey is the first standardized, nationally reported patient survey designed to measure patients’ perceptions of their hospital experience. The US Centers for Medicare and Medicaid Services (CMS) publishes HCAHPS results on its website 4 times a year, which serves as an important incentive for hospitals to improve patient safety and quality of health care delivery.56
Another novel approach links multiple symptoms to a range of target diseases using the Symptom-Disease Pair Analysis of Diagnostic Error (SPADE) framework. Using “big data” technologies, this technique can help discover otherwise hidden symptom-disease links and improve overall diagnostic performance. This approach is proposed for both case-control (look-back) and cohort (look-forward) studies assessing diagnostic errors and misdiagnosis-related harms. For example, starting with a known diagnosis with high potential for harm (eg, stroke), the “look-back” approach can be used to identify high-risk symptoms (eg, dizziness, vertigo). In the “look-forward” approach, a single symptom or exposure risk factor known to be frequently misdiagnosed (eg, dizziness) can be analyzed to identify potential adverse disease outcomes (eg, stroke, migraine).57
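The following toy example illustrates the "look-back" logic in the spirit of SPADE: among admissions for a harm diagnosis (stroke), it counts how many patients had a preceding visit for a high-risk symptom (dizziness) within a fixed window. The visit records, diagnosis labels, and window are invented for illustration and do not reproduce the SPADE methodology itself.

```python
# Toy "look-back" analysis in the spirit of SPADE, using invented visit records:
# among stroke admissions, how many had a dizziness visit in the prior 30 days?
from datetime import date

visits = [  # (patient_id, visit_date, diagnosis) -- hypothetical data
    (1, date(2022, 3, 1), "dizziness"),
    (1, date(2022, 3, 20), "stroke"),
    (2, date(2022, 5, 2), "stroke"),
    (3, date(2022, 6, 1), "dizziness"),
]

LOOKBACK_DAYS = 30
stroke_admissions = [(pid, d) for pid, d, dx in visits if dx == "stroke"]

flagged = 0
for pid, stroke_date in stroke_admissions:
    had_recent_dizziness = any(
        p == pid and dx == "dizziness" and 0 < (stroke_date - d).days <= LOOKBACK_DAYS
        for p, d, dx in visits
    )
    if had_recent_dizziness:
        flagged += 1

print(f"{flagged} of {len(stroke_admissions)} stroke admissions had a dizziness visit "
      f"in the prior {LOOKBACK_DAYS} days")
```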
Many large ongoing studies of diagnostic errors among hospitalized patients, such as Utility of Predictive Systems to Identify Inpatient Diagnostic Errors (UPSIDE),58 Patient Safety Learning Lab (PSLL),59 and Achieving Diagnostic Excellence through Prevention and Teamwork (ADEPT),60 are using structured chart review methodologies that combine many of the above strategies. Cases triggered by certain events (eg, ICU transfer, death, rapid response event, new or worsening acute kidney injury) are reviewed using validated tools, including the Safer Dx framework and DEER taxonomy, to provide the most precise estimates of the burden of diagnostic errors in hospitalized patients. These estimates may be much higher than previously predicted using traditional chart review approaches.6,24 For example, a recently published study of 2809 random admissions in 11 Massachusetts hospitals identified 978 adverse events but only 10 diagnostic errors (diagnostic error rate, 0.4%).19 This was likely because the trigger method used in the study did not examine the diagnostic process as critically as the Safer Dx framework and DEER taxonomy tools do, thereby underestimating the total number of diagnostic errors. Further, these ongoing studies (eg, UPSIDE, ADEPT) aim to employ advanced machine-learning methods to create models that can improve overall diagnostic performance. This would pave the way to test and build novel, efficient, and scalable interventions to reduce diagnostic errors and improve patient outcomes.
Strategies to Improve Diagnostic Safety in Hospitalized Patients
Disease-specific biomedical research, as well as advances in laboratory, imaging, and other technologies, play a critical role in improving diagnostic accuracy. However, these technical approaches do not address many of the broader clinician- and system-level failure points and opportunities for improvement. Various patient-, provider-, and organizational-level interventions that could make diagnostic processes more resilient and reduce the risk of error and patient harm have been proposed.61
Among these strategies are approaches to empower patients and their families. Fostering therapeutic relationships between patients and members of the care team is essential to reducing diagnostic errors.62 Facilitating timely access to health records, ensuring transparency in decision making, and tailoring communication strategies to patients’ cultural and educational backgrounds can reduce harm.63 Similarly, at the system level, enhancing communication among different providers by use of tools such as structured handoffs can prevent communication breakdowns and facilitate positive outcomes.64
Interventions targeted at individual health care providers, such as educational programs to improve content-specific knowledge, can enhance diagnostic performance. Regular feedback, strategies to enhance equity, and fostering an environment where all providers are actively encouraged to think critically and participate in the diagnostic process (training programs to use “diagnostic time-outs” and making it a “team sport”) can improve clinical reasoning.65,66 Use of standardized patients can help identify individual-level cognitive failure points and facilitate creation of new interventions to improve clinical decision-making processes.67
Novel health information technologies can further augment these efforts. These include effective documentation by maintaining dynamic and accurate patient histories, problem lists, and medication lists68-70; use of electronic health record–based algorithms to identify potential diagnostic delays for serious conditions71,72; use of telemedicine technologies to improve accessibility and coordination73; application of mobile health and wearable technologies to facilitate data-gathering and care delivery74,75; and use of computerized decision-support tools, including applications to interpret electrocardiograms, imaging studies, and other diagnostic tests.76
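As one hedged illustration of the kind of EHR-based delay-detection rule mentioned above, the sketch below flags patients with newly diagnosed iron-deficiency anemia and no gastrointestinal evaluation ordered within a follow-up window. The data model, field names, and 60-day threshold are hypothetical and are not drawn from any specific published algorithm.

```python
# Hypothetical EHR e-trigger: new iron-deficiency anemia (IDA) with no
# gastrointestinal (GI) evaluation ordered within a 60-day follow-up window.
# The data model and threshold are illustrative only.
from datetime import date, timedelta

FOLLOW_UP_WINDOW = timedelta(days=60)
AS_OF_DATE = date(2022, 4, 1)  # fixed reference date for reproducibility

patients = [
    {"id": "A", "ida_diagnosed": date(2022, 1, 10), "gi_eval_ordered": date(2022, 2, 1)},
    {"id": "B", "ida_diagnosed": date(2022, 1, 15), "gi_eval_ordered": None},
]

for p in patients:
    overdue = (
        p["gi_eval_ordered"] is None
        and AS_OF_DATE - p["ida_diagnosed"] > FOLLOW_UP_WINDOW
    )
    if overdue:
        print(f"Patient {p['id']}: possible delayed evaluation of iron-deficiency anemia")
```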
Use of precision medicine, powered by new artificial intelligence (AI) tools, is becoming more widespread. Algorithms powered by AI can augment, and sometimes even outperform, clinician decision-making in areas such as oncology, radiology, and primary care.77 Large biobanks such as the All of Us research program can be used to study thousands of environmental and genetic risk factors and health conditions simultaneously and to help identify the treatments that work best for people of different backgrounds.78 Active research in these areas holds great promise for how and when we diagnose diseases and make appropriate preventive and treatment decisions. However, significant scientific, ethical, and regulatory challenges will need to be overcome before these technologies can address some of the most complex problems in health care.79
Finally, diagnostic performance is affected by the external environment, including the functioning of the medical liability system. Diagnostic errors that lead to patient harm are a leading cause of malpractice claims.80 Developing a legal environment, in collaboration with patient advocacy groups and health care organizations, that promotes and facilitates timely disclosure of diagnostic errors could decrease the incentive to hide errors, advance care processes, and improve outcomes.81,82
Conclusion
The burden of diagnostic errors in hospitalized patients is unacceptably high and remains an underemphasized cause of preventable morbidity and mortality. Diagnostic errors often result from a breakdown in multiple interdependent processes that involve patient-, provider-, and system-level factors. Significant challenges remain in defining and identifying diagnostic errors as well as underlying process-failure points. The most effective interventions to reduce diagnostic errors will require greater patient participation in the diagnostic process and a mix of evidence-based interventions that promote individual-provider excellence as well as system-level changes. Further research and collaboration among various stakeholders should help improve diagnostic safety for hospitalized patients.
Corresponding author: Abhishek Goyal, MD, MPH; agoyal4@bwh.harvard.edu
Disclosures: Dr. Dalal disclosed receiving income ≥ $250 from MayaMD.
1. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med. 2005;165(13):1493-1499. doi:10.1001/archinte.165.13.1493
2. National Academies of Sciences, Engineering, and Medicine. 2015. Improving Diagnosis in Health Care. The National Academies Press. doi:10.17226/21794
3. Singh H, Graber ML. Improving diagnosis in health care—the next imperative for patient safety. N Engl J Med. 2015;373(26):2493-2495. doi:10.1056/NEJMp1512241
4. Makary MA, Daniel M. Medical error—the third leading cause of death in the US. BMJ. 2016;353:i2139. doi:10.1136/bmj.i2139
5. Flanders SA, Centor B, Weber V, McGinn T, Desalvo K, Auerbach A. Challenges and opportunities in academic hospital medicine: report from the academic hospital medicine summit. J Gen Intern Med. 2009;24(5):636-641. doi:10.1007/s11606-009-0944-6
6. Griffin JA, Carr K, Bersani K, et al. Analyzing diagnostic errors in the acute setting: a process-driven approach. Diagnosis (Berl). 2021;9(1):77-88. doi:10.1515/dx-2021-0033
7. Itri JN, Tappouni RR, McEachern RO, Pesch AJ, Patel SH. Fundamentals of diagnostic error in imaging. RadioGraphics. 2018;38(6):1845-1865. doi:10.1148/rg.2018180021
8. Hammerling JA. A review of medical errors in laboratory diagnostics and where we are today. Lab Med. 2012;43(2):41-44. doi:10.1309/LM6ER9WJR1IHQAUY
9. Gunderson CG, Bilan VP, Holleck JL, et al. Prevalence of harmful diagnostic errors in hospitalised adults: a systematic review and meta-analysis. BMJ Qual Saf. 2020;29(12):1008-1018. doi:10.1136/bmjqs-2019-010822
10. Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I. N Engl J Med. 1991;324(6):370-376. doi:10.1056/NEJM199102073240604
11. Leape LL, Brennan TA, Laird N, et al. The nature of adverse events in hospitalized patients. Results of the Harvard Medical Practice Study II. N Engl J Med. 1991;324(6):377-384. doi:10.1056/NEJM199102073240605
12. Localio AR, Lawthers AG, Brennan TA, et al. Relation between malpractice claims and adverse events due to negligence. Results of the Harvard Medical Practice Study III. N Engl J Med. 1991;325(4):245-251. doi:10.1056/NEJM199107253250405
13. Wilson RM, Michel P, Olsen S, et al. Patient safety in developing countries: retrospective estimation of scale and nature of harm to patients in hospital. BMJ. 2012;344:e832. doi:10.1136/bmj.e832
14. Wilson RM, Runciman WB, Gibberd RW, Harrison BT, Newby L, Hamilton JD. The Quality in Australian Health Care Study. Med J Aust. 1995;163(9):458-471. doi:10.5694/j.1326-5377.1995.tb124691.x
15. Thomas EJ, Studdert DM, Burstin HR, et al. Incidence and types of adverse events and negligent care in Utah and Colorado. Med Care. 2000;38(3):261-271. doi:10.1097/00005650-200003000-00003
16. Baker GR, Norton PG, Flintoft V, et al. The Canadian Adverse Events Study: the incidence of adverse events among hospital patients in Canada. CMAJ. 2004;170(11):1678-1686. doi:10.1503/cmaj.1040498
17. Davis P, Lay-Yee R, Briant R, Ali W, Scott A, Schug S. Adverse events in New Zealand public hospitals II: preventability and clinical context. N Z Med J. 2003;116(1183):U624.
18. Aranaz-Andrés JM, Aibar-Remón C, Vitaller-Murillo J, et al. Incidence of adverse events related to health care in Spain: results of the Spanish National Study of Adverse Events. J Epidemiol Community Health. 2008;62(12):1022-1029. doi:10.1136/jech.2007.065227
19. Bates DW, Levine DM, Salmasian H, et al. The safety of inpatient health care. N Engl J Med. 2023;388(2):142-153. doi:10.1056/NEJMsa2206117
20. Soop M, Fryksmark U, Köster M, Haglund B. The incidence of adverse events in Swedish hospitals: a retrospective medical record review study. Int J Qual Health Care. 2009;21(4):285-291. doi:10.1093/intqhc/mzp025
21. Rafter N, Hickey A, Conroy RM, et al. The Irish National Adverse Events Study (INAES): the frequency and nature of adverse events in Irish hospitals—a retrospective record review study. BMJ Qual Saf. 2017;26(2):111-119. doi:10.1136/bmjqs-2015-004828
22. Blendon RJ, DesRoches CM, Brodie M, et al. Views of practicing physicians and the public on medical errors. N Engl J Med. 2002;347(24):1933-1940. doi:10.1056/NEJMsa022151
23. Saber Tehrani AS, Lee H, Mathews SC, et al. 25-year summary of US malpractice claims for diagnostic errors 1986-2010: an analysis from the National Practitioner Data Bank. BMJ Qual Saf. 2013;22(8):672-680. doi:10.1136/bmjqs-2012-001550
24. Malik MA, Motta-Calderon D, Piniella N, et al. A structured approach to EHR surveillance of diagnostic error in acute care: an exploratory analysis of two institutionally-defined case cohorts. Diagnosis (Berl). 2022;9(4):446-457. doi:10.1515/dx-2022-0032
25. Graber ML. The incidence of diagnostic error in medicine. BMJ Qual Saf. 2013;22(suppl 2):ii21-ii27. doi:10.1136/bmjqs-2012-001615
26. Bergl PA, Taneja A, El-Kareh R, Singh H, Nanchal RS. Frequency, risk factors, causes, and consequences of diagnostic errors in critically ill medical patients: a retrospective cohort study. Crit Care Med. 2019;47(11):e902-e910. doi:10.1097/CCM.0000000000003976
27. Hogan H, Healey F, Neale G, Thomson R, Vincent C, Black N. Preventable deaths due to problems in care in English acute hospitals: a retrospective case record review study. BMJ Qual Saf. 2012;21(9):737-745. doi:10.1136/bmjqs-2011-001159
28. Bergl PA, Nanchal RS, Singh H. Diagnostic error in the critically ill: defining the problem and exploring next steps to advance intensive care unit safety. Ann Am Thorac Soc. 2018;15(8):903-907. doi:10.1513/AnnalsATS.201801-068PS
29. Marquet K, Claes N, De Troy E, et al. One fourth of unplanned transfers to a higher level of care are associated with a highly preventable adverse event: a patient record review in six Belgian hospitals. Crit Care Med. 2015;43(5):1053-1061. doi:10.1097/CCM.0000000000000932
30. Rodwin BA, Bilan VP, Merchant NB, et al. Rate of preventable mortality in hospitalized patients: a systematic review and meta-analysis. J Gen Intern Med. 2020;35(7):2099-2106. doi:10.1007/s11606-019-05592-5
31. Winters B, Custer J, Galvagno SM, et al. Diagnostic errors in the intensive care unit: a systematic review of autopsy studies. BMJ Qual Saf. 2012;21(11):894-902. doi:10.1136/bmjqs-2012-000803
32. Raffel KE, Kantor MA, Barish P, et al. Prevalence and characterisation of diagnostic error among 7-day all-cause hospital medicine readmissions: a retrospective cohort study. BMJ Qual Saf. 2020;29(12):971-979. doi:10.1136/bmjqs-2020-010896
33. Weingart SN, Pagovich O, Sands DZ, et al. What can hospitalized patients tell us about adverse events? learning from patient-reported incidents. J Gen Intern Med. 2005;20(9):830-836. doi:10.1111/j.1525-1497.2005.0180.x
34. Schiff GD, Hasan O, Kim S, et al. Diagnostic error in medicine: analysis of 583 physician-reported errors. Arch Intern Med. 2009;169(20):1881-1887. doi:10.1001/archinternmed.2009.333
35. Singh H, Schiff GD, Graber ML, Onakpoya I, Thompson MJ. The global burden of diagnostic errors in primary care. BMJ Qual Saf. 2017;26(6):484-494. doi:10.1136/bmjqs-2016-005401
36. Schiff GD, Leape LL. Commentary: how can we make diagnosis safer? Acad Med J Assoc Am Med Coll. 2012;87(2):135-138. doi:10.1097/ACM.0b013e31823f711c
37. Schiff GD, Kim S, Abrams R, et al. Diagnosing diagnosis errors: lessons from a multi-institutional collaborative project. In: Henriksen K, Battles JB, Marks ES, Lewin DI, eds. Advances in Patient Safety: From Research to Implementation. Volume 2: Concepts and Methodology. AHRQ Publication No. 05-0021-2. Agency for Healthcare Research and Quality (US); 2005. Accessed January 16, 2023. http://www.ncbi.nlm.nih.gov/books/NBK20492/
38. Newman-Toker DE. A unified conceptual model for diagnostic errors: underdiagnosis, overdiagnosis, and misdiagnosis. Diagnosis (Berl). 2014;1(1):43-48. doi:10.1515/dx-2013-0027
39. Abimanyi-Ochom J, Bohingamu Mudiyanselage S, Catchpool M, Firipis M, Wanni Arachchige Dona S, Watts JJ. Strategies to reduce diagnostic errors: a systematic review. BMC Med Inform Decis Mak. 2019;19(1):174. doi:10.1186/s12911-019-0901-1
40. Gupta A, Harrod M, Quinn M, et al. Mind the overlap: how system problems contribute to cognitive failure and diagnostic errors. Diagnosis (Berl). 2018;5(3):151-156. doi:10.1515/dx-2018-0014
41. Saposnik G, Redelmeier D, Ruff CC, Tobler PN. Cognitive biases associated with medical decisions: a systematic review. BMC Med Inform Decis Mak. 2016;16:138. doi:10.1186/s12911-016-0377-1
42. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78(8):775-780. doi:10.1097/00001888-200308000-00003
43. Chapman EN, Kaatz A, Carnes M. Physicians and implicit bias: how doctors may unwittingly perpetuate health care disparities. J Gen Intern Med. 2013;28(11):1504-1510. doi:10.1007/s11606-013-2441-1
44. Zwaan L, Singh H. The challenges in defining and measuring diagnostic error. Diagnosis (Ber). 2015;2(2):97-103. doi:10.1515/dx-2014-0069
45. Arkes HR, Wortmann RL, Saville PD, Harkness AR. Hindsight bias among physicians weighing the likelihood of diagnoses. J Appl Psychol. 1981;66(2):252-254.
46. Singh H. Editorial: Helping health care organizations to define diagnostic errors as missed opportunities in diagnosis. Jt Comm J Qual Patient Saf. 2014;40(3):99-101. doi:10.1016/s1553-7250(14)40012-6
47. Vassar M, Holzmann M. The retrospective chart review: important methodological considerations. J Educ Eval Health Prof. 2013;10:12. doi:10.3352/jeehp.2013.10.12
48. Welch HG, Black WC. Overdiagnosis in cancer. J Natl Cancer Inst. 2010;102(9):605-613. doi:10.1093/jnci/djq099
49. Moynihan R, Doust J, Henry D. Preventing overdiagnosis: how to stop harming the healthy. BMJ. 2012;344:e3502. doi:10.1136/bmj.e3502
50. Hayward RA, Hofer TP. Estimating hospital deaths due to medical errors: preventability is in the eye of the reviewer. JAMA. 2001;286(4):415-420. doi:10.1001/jama.286.4.415
51. Singh H, Sittig DF. Advancing the science of measurement of diagnostic errors in healthcare: the Safer Dx framework. BMJ Qual Saf. 2015;24(2):103-110. doi:10.1136/bmjqs-2014-003675
52. Singh H, Khanna A, Spitzmueller C, Meyer AND. Recommendations for using the Revised Safer Dx Instrument to help measure and improve diagnostic safety. Diagnosis (Berl). 2019;6(4):315-323. doi:10.1515/dx-2019-0012
53. Classen DC, Resar R, Griffin F, et al. “Global trigger tool” shows that adverse events in hospitals may be ten times greater than previously measured. Health Aff (Millwood). 2011;30(4):581-589. doi:10.1377/hlthaff.2011.0190
54. Schiff GD. Minimizing diagnostic error: the importance of follow-up and feedback. Am J Med. 2008;121(5 suppl):S38-S42. doi:10.1016/j.amjmed.2008.02.004
55. Mitchell I, Schuster A, Smith K, Pronovost P, Wu A. Patient safety incident reporting: a qualitative study of thoughts and perceptions of experts 15 years after “To Err is Human.” BMJ Qual Saf. 2016;25(2):92-99. doi:10.1136/bmjqs-2015-004405
56. Mazurenko O, Collum T, Ferdinand A, Menachemi N. Predictors of hospital patient satisfaction as measured by HCAHPS: a systematic review. J Healthc Manag. 2017;62(4):272-283. doi:10.1097/JHM-D-15-00050
57. Liberman AL, Newman-Toker DE. Symptom-Disease Pair Analysis of Diagnostic Error (SPADE): a conceptual framework and methodological approach for unearthing misdiagnosis-related harms using big data. BMJ Qual Saf. 2018;27(7):557-566. doi:10.1136/bmjqs-2017-007032
58. Utility of Predictive Systems to Identify Inpatient Diagnostic Errors: the UPSIDE study. NIH RePort/RePORTER. Accessed January 14, 2023. https://reporter.nih.gov/search/rpoHXlEAcEudQV3B9ld8iw/project-details/10020962
59. Overview of Patient Safety Learning Laboratory (PSLL) Projects. Agency for Healthcare Research and Quality. Accessed January 14, 2023. https://www.ahrq.gov/patient-safety/resources/learning-lab/index.html
60. Achieving Diagnostic Excellence through Prevention and Teamwork (ADEPT). NIH RePort/RePORTER. Accessed January 14, 2023. https://reporter.nih.gov/project-details/10642576
61. Zwaan L, Singh H. Diagnostic error in hospitals: finding forests not just the big trees. BMJ Qual Saf. 2020;29(12):961-964. doi:10.1136/bmjqs-2020-011099
62. Longtin Y, Sax H, Leape LL, Sheridan SE, Donaldson L, Pittet D. Patient participation: current knowledge and applicability to patient safety. Mayo Clin Proc. 2010;85(1):53-62. doi:10.4065/mcp.2009.0248
63. Murphy DR, Singh H, Berlin L. Communication breakdowns and diagnostic errors: a radiology perspective. Diagnosis (Berl). 2014;1(4):253-261. doi:10.1515/dx-2014-0035
64. Singh H, Naik AD, Rao R, Petersen LA. Reducing diagnostic errors through effective communication: harnessing the power of information technology. J Gen Intern Med. 2008;23(4):489-494. doi:10.1007/s11606-007-0393-z
65. Singh H, Connor DM, Dhaliwal G. Five strategies for clinicians to advance diagnostic excellence. BMJ. 2022;376:e068044. doi:10.1136/bmj-2021-068044
66. Yale S, Cohen S, Bordini BJ. Diagnostic time-outs to improve diagnosis. Crit Care Clin. 2022;38(2):185-194. doi:10.1016/j.ccc.2021.11.008
67. Schwartz A, Peskin S, Spiro A, Weiner SJ. Impact of unannounced standardized patient audit and feedback on care, documentation, and costs: an experiment and claims analysis. J Gen Intern Med. 2021;36(1):27-34. doi:10.1007/s11606-020-05965-1
68. Carpenter JD, Gorman PN. Using medication list—problem list mismatches as markers of potential error. Proc AMIA Symp. 2002:106-110.
69. Hron JD, Manzi S, Dionne R, et al. Electronic medication reconciliation and medication errors. Int J Qual Health Care. 2015;27(4):314-319. doi:10.1093/intqhc/mzv046
70. Graber ML, Siegal D, Riah H, Johnston D, Kenyon K. Electronic health record–related events in medical malpractice claims. J Patient Saf. 2019;15(2):77-85. doi:10.1097/PTS.0000000000000240
71. Murphy DR, Wu L, Thomas EJ, Forjuoh SN, Meyer AND, Singh H. Electronic trigger-based intervention to reduce delays in diagnostic evaluation for cancer: a cluster randomized controlled trial. J Clin Oncol. 2015;33(31):3560-3567. doi:10.1200/JCO.2015.61.1301
72. Singh H, Giardina TD, Forjuoh SN, et al. Electronic health record-based surveillance of diagnostic errors in primary care. BMJ Qual Saf. 2012;21(2):93-100. doi:10.1136/bmjqs-2011-000304
73. Armaignac DL, Saxena A, Rubens M, et al. Impact of telemedicine on mortality, length of stay, and cost among patients in progressive care units: experience from a large healthcare system. Crit Care Med. 2018;46(5):728-735. doi:10.1097/CCM.0000000000002994
74. MacKinnon GE, Brittain EL. Mobile health technologies in cardiopulmonary disease. Chest. 2020;157(3):654-664. doi:10.1016/j.chest.2019.10.015
75. DeVore AD, Wosik J, Hernandez AF. The future of wearables in heart failure patients. JACC Heart Fail. 2019;7(11):922-932. doi:10.1016/j.jchf.2019.08.008
76. Tsai TL, Fridsma DB, Gatti G. Computer decision support as a source of interpretation error: the case of electrocardiograms. J Am Med Inform Assoc. 2003;10(5):478-483. doi:10.1197/jamia.M1279
77. Lin SY, Mahoney MR, Sinsky CA. Ten ways artificial intelligence will transform primary care. J Gen Intern Med. 2019;34(8):1626-1630. doi:10.1007/s11606-019-05035-1
78. Ramirez AH, Gebo KA, Harris PA. Progress with the All Of Us research program: opening access for researchers. JAMA. 2021;325(24):2441-2442. doi:10.1001/jama.2021.7702
79. Johnson KB, Wei W, Weeraratne D, et al. Precision medicine, AI, and the future of personalized health care. Clin Transl Sci. 2021;14(1):86-93. doi:10.1111/cts.12884
80. Gupta A, Snyder A, Kachalia A, Flanders S, Saint S, Chopra V. Malpractice claims related to diagnostic errors in the hospital. BMJ Qual Saf. 2017;27(1):bmjqs-2017-006774. doi:10.1136/bmjqs-2017-006774
81. Renkema E, Broekhuis M, Ahaus K. Conditions that influence the impact of malpractice litigation risk on physicians’ behavior regarding patient safety. BMC Health Serv Res. 2014;14(1):38. doi:10.1186/1472-6963-14-38
82. Kachalia A, Mello MM, Nallamothu BK, Studdert DM. Legal and policy interventions to improve patient safety. Circulation. 2016;133(7):661-671. doi:10.1161/CIRCULATIONAHA.115.015880
Abstract
Diagnostic errors in hospitalized patients are a leading cause of preventable morbidity and mortality. Significant challenges in defining and measuring diagnostic errors and underlying process failure points have led to considerable variability in reported rates of diagnostic errors and adverse outcomes. In this article, we explore the diagnostic process and its discrete components, emphasizing the centrality of the patient in decision-making as well as the continuous nature of the process. We review the incidence of diagnostic errors in hospitalized patients and different methodological approaches that have been used to arrive at these estimates. We discuss different but interdependent provider- and system-related process-failure points that lead to diagnostic errors. We examine specific challenges related to measurement of diagnostic errors and describe traditional and novel approaches that are being used to obtain the most precise estimates. Finally, we examine various patient-, provider-, and organizational-level interventions that have been proposed to improve diagnostic safety in hospitalized patients.
Keywords: diagnostic error, hospital medicine, patient safety.
Diagnosis is defined as a “pre-existing set of categories agreed upon by the medical profession to designate a specific condition.”1 The diagnostic process involves obtaining a clinical history, performing a physical examination, conducting diagnostic testing, and consulting with other clinical providers to gather data that are relevant to understanding the underlying disease processes. This exercise involves generating hypotheses and updating prior probabilities as more information and evidence become available. Throughout this process of information gathering, integration, and interpretation, there is an ongoing assessment of whether sufficient and necessary knowledge has been obtained to make an accurate diagnosis and provide appropriate treatment.2
Diagnostic error is defined as a missed opportunity to make a timely diagnosis as part of this iterative process, including the failure of communicating the diagnosis to the patient in a timely manner.3 It can be categorized as a missed, delayed, or incorrect diagnosis based on available evidence at the time. Establishing the correct diagnosis has important implications. A timely and precise diagnosis ensures the patient the highest probability of having a positive health outcome that reflects an appropriate understanding of underlying disease processes and is consistent with their overall goals of care.3 When diagnostic errors occur, they can cause patient harm. Adverse events due to medical errors, including diagnostic errors, are estimated to be the third leading cause of death in the United States.4 Most people will experience at least 1 diagnostic error in their lifetime. In the 2015 National Academy of Medicine report Improving Diagnosis in Healthcare, diagnostic errors were identified as a major hazard as well as an opportunity to improve patient outcomes.2
Diagnostic errors during hospitalizations are especially concerning, as they are more likely to be implicated in a wider spectrum of harm, including permanent disability and death. This has become even more relevant for hospital medicine physicians and other clinical providers as they encounter increasing cognitive and administrative workloads, rising dissatisfaction and burnout, and unique obstacles such as night-time scheduling.5
Incidence of Diagnostic Errors in Hospitalized Patients
Several methodological approaches have been used to estimate the incidence of diagnostic errors in hospitalized patients. These include retrospective reviews of samples of all hospital admissions, evaluations of selected adverse outcomes (including autopsy studies), patient and provider surveys, and analyses of malpractice claims. Laboratory testing audits and secondary reviews in other diagnostic subspecialties (eg, radiology, pathology, and microbiology) are also essential to improving diagnostic performance in these specialized fields, which in turn affects overall hospital diagnostic error rates.6-8 Together, these approaches provide unique insights into the degree to which potential harms, ranging from temporary impairment to permanent disability and death, are attributable to different failure points in the diagnostic process.
Large retrospective chart reviews of random hospital admissions remain the most accurate way to determine the overall incidence of diagnostic errors in hospitalized patients.9 The Harvard Medical Practice Study, published in 1991, laid the groundwork for measuring the incidence of adverse events in hospitalized patients and assessing their relation to medical error, negligence, and disability. Reviewing 30,121 randomly selected records from 51 randomly selected acute care hospitals in New York State, the study found that adverse events occurred in 3.7% of hospitalizations, diagnostic errors accounted for 13.8% of these events, and these errors were likely attributable to negligence in 74.7% of cases. The study not only outlined individual-level process failures, but also focused attention on some of the systemic causes, setting the agenda for quality improvement research in hospital-based care for years to come.10-12 A recent systematic review and meta-analysis of 22 hospital admission studies found a pooled rate of 0.7% (95% CI, 0.5%-1.1%) for harmful diagnostic errors.9 It also found significant variation in the rates of adverse events and diagnostic errors and in the range of diagnoses that were missed, driven primarily by differences in the pre-test probability of diagnostic error across the cohorts studied and by heterogeneity in how the component studies defined and measured “diagnostic error.” The analysis, however, did not account for diagnostic errors that were not related to patient harm (missed opportunities); therefore, it likely significantly underestimated the true incidence of diagnostic errors in these study populations. Table 1 summarizes some of the key studies that have examined the incidence of harmful diagnostic errors in hospitalized patients.9-21
The chief limitation of reviewing random hospital admissions is that, since overall rates of diagnostic errors are still relatively low, a large number of case reviews are required to identify a sufficient sample of adverse outcomes to gain a meaningful understanding of the underlying process failure points and develop tools for remediation. Patient and provider surveys or data from malpractice claims can be high-yield starting points for research on process errors.22,23 Reviews of enriched cohorts of adverse outcomes, such as rapid-response events, intensive care unit (ICU) transfers, deaths, and hospital readmissions, can be an efficient way to identify the process failures that lead to the greatest harm. Depending on the research approach and the types of underlying patient populations sampled, rates of diagnostic errors in these high-risk groups have been estimated to be approximately 5% to 20%, or even higher.6,24-31 For example, a retrospective study of 391 cases of unplanned 7-day readmissions found that 5.6% of cases contained at least 1 diagnostic error during the index admission.32 In a study conducted at 6 Belgian acute-care hospitals, 56% of patients requiring an unplanned transfer to a higher level of care were determined to have had an adverse event, and of these adverse events, 12.4% of cases were associated with errors in diagnosis.29 A systematic review of 16 hospital-based studies estimated that 3.1% of all inpatient deaths were likely preventable, which corresponded to 22,165 deaths annually in the United States.30 Another such review of 31 autopsy studies reported that 28% of autopsied ICU patients had at least 1 misdiagnosis; of these diagnostic errors, 8% were classified as potentially lethal, and 15% were considered major but not lethal.31 Significant drawbacks of such enriched cohort studies, however, are their poor generalizability and inability to detect failure points that do not lead to patient harm (near-miss events).33
Causes of Diagnostic Errors in Hospitalized Patients
All aspects of the diagnostic process are susceptible to errors. These errors stem from a variety of faulty processes, including failure of the patient to engage with the health care system (eg, due to lack of insurance or transportation, or delay in seeking care); failure in information gathering (eg, missed history or exam findings, ordering wrong tests, laboratory errors); failure in information interpretation (eg, exam finding or test result misinterpretation); inaccurate hypothesis generation (eg, due to suboptimal prioritization or weighing of supporting evidence); and failure in communication (eg, with other team members or with the patient).2,34 Reasons for diagnostic process failures vary widely across different health care settings. While clinician assessment errors (eg, failure to consider competing diagnoses or, conversely, overweighing them) and errors in the testing and monitoring phase (eg, failure to order or follow up diagnostic tests) can lead to a majority of diagnostic errors in some patient populations, in other settings, social (eg, poor health literacy, punitive cultural practices) and economic factors (eg, lack of access to appropriate diagnostic tests or to specialty expertise) play a more prominent role.34,35
The Figure describes the relationship between components of the diagnostic process and subsequent outcomes, including diagnostic process failures, diagnostic errors, and absence or presence of patient harm.2,36,37 It reemphasizes the centrality of the patient in decision-making and the continuous nature of the process. The Figure also illustrates that only a minority of process failures result in diagnostic errors, and a smaller proportion of diagnostic errors actually lead to patient harm. Conversely, it also shows that diagnostic errors can happen without any obvious process-failure points, and, similarly, patient harm can take place in the absence of any evident diagnostic errors.36-38 Finally, it highlights the need to incorporate feedback from process failures, diagnostic errors, and favorable and unfavorable patient outcomes in order to inform future quality improvement efforts and research.
A significant proportion of diagnostic errors are due to system-related vulnerabilities, such as limitations in the availability, adoption, or quality of workforce training, health informatics resources, and diagnostic capabilities. The lack of an institutional culture that promotes safety and transparency also predisposes to diagnostic errors.39,40 The other major domain of process failures is related to cognitive errors in clinician decision-making. Anchoring, confirmation bias, availability bias, and base-rate neglect are some of the common cognitive biases that, along with personality traits (aversion to risk or ambiguity, overconfidence) and affective biases (influence of emotion on decision-making), often determine the degree of utilization of resources and the possibility of suboptimal diagnostic performance.41,42 Further, implicit biases related to age, race, gender, and sexual orientation contribute to disparities in access to health care and outcomes.43 In a large number of cases of preventable adverse outcomes, however, there are multiple interdependent individual and system-related failure points that lead to diagnostic error and patient harm.6,32
Challenges in Defining and Measuring Diagnostic Errors
In order to develop effective, evidence-based interventions to reduce diagnostic errors in hospitalized patients, it is essential to be able to first operationally define, and then accurately measure, diagnostic errors and the process failures that contribute to these errors in a standardized way that is reproducible across different settings.6,44 There are a number of obstacles in this endeavor.
A fundamental problem is that establishing a diagnosis is not a single act but a process. Patterns of symptoms and clinical presentations often differ for the same disease. Information required to make a diagnosis is usually gathered in stages, with the clinician obtaining additional data while considering many possibilities, of which 1 may ultimately be correct. Diagnoses evolve over time and in different care settings. “The most likely diagnosis” is not always the same as “the final correct diagnosis.” Moreover, the diagnostic process is influenced by patients’ individual clinical courses and preferences over time. This makes determination of missed, delayed, or incorrect diagnoses challenging.45,46
For hospitalized patients, generally the goal is to first rule out more serious and acute conditions (eg, pulmonary embolism or stroke), even if their probability is rather low. Conversely, a diagnosis that appears less consequential if delayed (eg, chronic anemia of unclear etiology) might not be pursued on an urgent basis, and is often left to outpatient providers to evaluate, but may still manifest in downstream harm (eg, delayed diagnosis of gastrointestinal malignancy or recurrent admissions for heart failure due to missed iron-deficiency anemia). Therefore, assigning diagnosis likelihoods in hindsight can be highly subjective and not always accurate. This can be particularly difficult when clinician and other team deliberations are not recorded in their entirety.47
Another hurdle in the practice of diagnostic medicine is preserving the balance between underdiagnosis and overly aggressive diagnostic evaluation. Conducting laboratory, imaging, or other diagnostic studies without a clear shared understanding of how they would affect clinical decision-making (eg, use of prostate-specific antigen to detect prostate cancer) not only leads to increased costs but can also delay appropriate care. Worse, subsequent unnecessary diagnostic tests and treatments can sometimes lead to serious harm.48,49
Finally, retrospective reviews by clinicians are subject to multiple potential limitations that include failure to create well-defined research questions, poorly developed inclusion and exclusion criteria, and issues related to inter- and intra-rater reliability.50 These methodological deficiencies can occur despite following “best practice” guidelines during the study planning, execution, and analysis phases. They further add to the challenge of defining and measuring diagnostic errors.47
Strategies to Improve Measurement of Diagnostic Errors
Development of new methodologies to reliably measure diagnostic errors is an area of active research. The advancement of uniform and universally agreed-upon frameworks to define and identify process failure points and diagnostic errors would help reduce measurement error and support development and testing of interventions that could be generalizable across different health care settings. To more accurately define and measure diagnostic errors, several novel approaches have been proposed (Table 2).
The Safer Dx framework is a comprehensive instrument developed to advance the measurement of diagnostic errors. For an episode of care under review, the instrument scores various items to determine the likelihood of a diagnostic error. These items evaluate multiple dimensions affecting diagnostic performance and measurements across 3 broad domains: structure (provider and organizational characteristics—from everyone involved with patient care, to computing infrastructure, to policies and regulations), process (elements of the patient-provider encounter, diagnostic test performance and follow-up, and subspecialty- and referral-specific factors), and outcome (establishing accurate and timely diagnosis as opposed to missed, delayed, or incorrect diagnosis). This instrument has been revised and can be further modified by a variety of stakeholders, including clinicians, health care organizations, and policymakers, to identify potential diagnostic errors in a standardized way for patient safety and quality improvement research.51,52
Use of standardized tools, such as the Diagnosis Error Evaluation and Research (DEER) taxonomy, can help to identify and classify specific failure points across different diagnostic process dimensions.37 These failure points can be classified into: issues related to patient presentation or access to health care; failure to obtain or misinterpretation of history or physical exam findings; errors in use of diagnostic tests due to technical or clinician-related factors; failures in appropriate weighing of evidence and hypothesis generation; errors associated with the referral or consultation process; and failure to monitor the patient or obtain timely follow-up.34 The DEER taxonomy can also be modified based on specific research questions and study populations. Further, it can be recategorized to correspond to the Safer Dx framework's diagnostic process dimensions to provide insights into reasons for specific process failures and to develop new interventions to mitigate errors and patient harm.6
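For illustration only, the sketch below shows one way such a taxonomy could be operationalized in a review database so that failure points can be tallied across reviewed cases; the category labels paraphrase the dimensions listed above, and the class, field, and function names are hypothetical rather than part of the published DEER instrument.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List

class FailurePoint(Enum):
    """Broad failure-point categories paraphrased from the DEER dimensions described above."""
    ACCESS_OR_PRESENTATION = "patient presentation or access to care"
    HISTORY_OR_EXAM = "history or exam not obtained, or misinterpreted"
    DIAGNOSTIC_TESTING = "error in ordering, performing, or interpreting tests"
    HYPOTHESIS_GENERATION = "failure in weighing evidence or generating hypotheses"
    REFERRAL_OR_CONSULTATION = "error in the referral or consultation process"
    FOLLOW_UP_AND_MONITORING = "failure to monitor or obtain timely follow-up"

@dataclass
class CaseReview:
    """One reviewed episode of care; reviewers tag every failure point they identify."""
    case_id: str
    diagnostic_error: bool
    failure_points: List[FailurePoint] = field(default_factory=list)

def tally_failure_points(reviews: List[CaseReview]) -> Dict[FailurePoint, int]:
    """Count failure points among cases judged to contain a diagnostic error."""
    counts: Dict[FailurePoint, int] = {}
    for review in reviews:
        if review.diagnostic_error:
            for fp in review.failure_points:
                counts[fp] = counts.get(fp, 0) + 1
    return counts

# Usage: two hypothetical reviewed cases.
reviews = [
    CaseReview("C1", True, [FailurePoint.DIAGNOSTIC_TESTING, FailurePoint.FOLLOW_UP_AND_MONITORING]),
    CaseReview("C2", False),
]
print({fp.value: n for fp, n in tally_failure_points(reviews).items()})
```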
Since a majority of diagnostic errors do not lead to actual harm, use of “triggers” or clues (eg, procedure-related complications, patient falls, transfers to a higher level of care, readmissions within 30 days) can be a more efficient method to identify diagnostic errors and adverse events that do cause harm. The Global Trigger Tool, developed by the Institute for Healthcare Improvement, uses this strategy. This tool has been shown to identify a significantly higher number of serious adverse events than comparable methods.53 This facilitates selection and development of strategies at the institutional level that are most likely to improve patient outcomes.24
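As a minimal sketch of the trigger concept (not the Global Trigger Tool itself, and with hypothetical field names and thresholds), an electronic screen might flag admissions that meet any high-risk criterion and route only those cases to structured chart review:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Admission:
    """Hypothetical admission record used only to illustrate trigger-based screening."""
    admission_id: str
    hours_to_unplanned_icu_transfer: Optional[float]  # None if no unplanned ICU transfer
    rapid_response_event: bool
    died_in_hospital: bool
    days_to_unplanned_readmission: Optional[int]      # None if not readmitted

def trigger_flags(adm: Admission) -> List[str]:
    """Return the trigger criteria met by this admission; flagged cases go to chart review."""
    flags = []
    if adm.hours_to_unplanned_icu_transfer is not None and adm.hours_to_unplanned_icu_transfer <= 24:
        flags.append("unplanned ICU transfer within 24 hours")
    if adm.rapid_response_event:
        flags.append("rapid response event")
    if adm.died_in_hospital:
        flags.append("in-hospital death")
    if adm.days_to_unplanned_readmission is not None and adm.days_to_unplanned_readmission <= 30:
        flags.append("unplanned readmission within 30 days")
    return flags

# Usage: screen a small hypothetical cohort and print only the triggered admissions.
cohort = [
    Admission("A1", None, False, False, None),
    Admission("A2", 18.0, True, False, 12),
]
for adm in cohort:
    hits = trigger_flags(adm)
    if hits:
        print(f"{adm.admission_id}: {'; '.join(hits)}")
```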
Encouraging and facilitating voluntary or prompted reporting from patients and clinicians can also play an important role in capturing diagnostic errors. Patients and clinicians are not only the key stakeholders but are also uniquely placed within the diagnostic process to detect and report potential errors.25,54 Patient-safety-event reporting systems, such as RL6, play a vital role in reporting near-misses and adverse events. These systems provide a mechanism for team members at all levels within the hospital to contribute toward reporting patient adverse events, including those arising from diagnostic errors.55 The Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey is the first standardized, nationally reported patient survey designed to measure patients’ perceptions of their hospital experience. The US Centers for Medicare and Medicaid Services (CMS) publishes HCAHPS results on its website 4 times a year, which serves as an important incentive for hospitals to improve patient safety and quality of health care delivery.56
Another novel approach links multiple symptoms to a range of target diseases using the Symptom-Disease Pair Analysis of Diagnostic Error (SPADE) framework. Using “big data” technologies, this technique can help discover otherwise hidden symptom-disease links and improve overall diagnostic performance. This approach is proposed for both case-control (look-back) and cohort (look-forward) studies assessing diagnostic errors and misdiagnosis-related harms. For example, starting with a known diagnosis with high potential for harm (eg, stroke), the “look-back” approach can be used to identify high-risk symptoms (eg, dizziness, vertigo). In the “look-forward” approach, a single symptom or exposure risk factor known to be frequently misdiagnosed (eg, dizziness) can be analyzed to identify potential adverse disease outcomes (eg, stroke, migraine).57
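To make the look-back idea concrete, here is a rough, self-contained sketch of the kind of query a SPADE-style analysis runs over encounter-level data; the column names, the dizziness-stroke pair, and the 30-day window are illustrative assumptions, and the published analyses add risk adjustment and control comparisons that are omitted here.

```python
import pandas as pd

# Hypothetical encounter-level data: one row per visit.
visits = pd.DataFrame({
    "patient_id":    [1, 1, 2, 3, 3],
    "visit_date":    pd.to_datetime(["2022-01-02", "2022-01-20", "2022-02-05",
                                     "2022-03-01", "2022-05-15"]),
    "encounter":     ["ED treat-and-release", "inpatient", "inpatient",
                      "ED treat-and-release", "inpatient"],
    "dx_or_symptom": ["dizziness", "stroke", "stroke", "dizziness", "stroke"],
})

LOOKBACK_DAYS = 30  # assumed window; real analyses tune this empirically

# Harmful target diagnosis (look-back starts here) and candidate antecedent symptom.
strokes = visits[(visits["encounter"] == "inpatient") & (visits["dx_or_symptom"] == "stroke")]
dizzy_visits = visits[(visits["encounter"] == "ED treat-and-release")
                      & (visits["dx_or_symptom"] == "dizziness")]

def preceded_by_symptom_visit(stroke_row: pd.Series) -> bool:
    """True if this stroke admission was preceded by a treat-and-release dizziness
    visit within the look-back window (a candidate missed opportunity)."""
    prior = dizzy_visits[dizzy_visits["patient_id"] == stroke_row["patient_id"]]
    gap_days = (stroke_row["visit_date"] - prior["visit_date"]).dt.days
    return bool(((gap_days > 0) & (gap_days <= LOOKBACK_DAYS)).any())

flagged = strokes.apply(preceded_by_symptom_visit, axis=1)
print(f"{flagged.mean():.0%} of stroke admissions had a recent dizziness visit")
```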
Many large ongoing studies looking at diagnostic errors among hospitalized patients, such as Utility of Predictive Systems to identify Inpatient Diagnostic Errors (UPSIDE),58 Patient Safety Learning Lab (PSLL),59 and Achieving Diagnostic Excellence through Prevention and Teamwork (ADEPT),60 are using structured chart review methodologies that incorporate many of the above strategies in combination. Cases triggered by certain events (eg, ICU transfer, death, rapid response event, new or worsening acute kidney injury) are reviewed using validated tools, including the Safer Dx framework and DEER taxonomy, to provide the most precise estimates of the burden of diagnostic errors in hospitalized patients. These estimates may be much higher than previously predicted using traditional chart review approaches.6,24 For example, a recently published study of 2809 random admissions in 11 Massachusetts hospitals identified 978 adverse events but only 10 diagnostic errors (diagnostic error rate, 0.4%).19 This was likely because the trigger method used in the study did not examine the diagnostic process as critically as the Safer Dx framework and DEER taxonomy tools do, thereby underestimating the total number of diagnostic errors. Further, these ongoing studies (eg, UPSIDE, ADEPT) aim to employ advanced machine-learning methods to create models that can improve overall diagnostic performance. This would pave the way to test and build novel, efficient, and scalable interventions to reduce diagnostic errors and improve patient outcomes.
Strategies to Improve Diagnostic Safety in Hospitalized Patients
Disease-specific biomedical research, as well as advances in laboratory, imaging, and other technologies, plays a critical role in improving diagnostic accuracy. However, these technical approaches do not address many of the broader clinician- and system-level failure points and opportunities for improvement. Various patient-, provider-, and organizational-level interventions that could make diagnostic processes more resilient and reduce the risk of error and patient harm have been proposed.61
Among these strategies are approaches to empower patients and their families. Fostering therapeutic relationships between patients and members of the care team is essential to reducing diagnostic errors.62 Facilitating timely access to health records, ensuring transparency in decision making, and tailoring communication strategies to patients’ cultural and educational backgrounds can reduce harm.63 Similarly, at the system level, enhancing communication among different providers by use of tools such as structured handoffs can prevent communication breakdowns and facilitate positive outcomes.64
Interventions targeted at individual health care providers, such as educational programs to improve content-specific knowledge, can enhance diagnostic performance. Regular feedback, strategies to enhance equity, and fostering an environment where all providers are actively encouraged to think critically and participate in the diagnostic process (training programs to use “diagnostic time-outs” and making it a “team sport”) can improve clinical reasoning.65,66 Use of standardized patients can help identify individual-level cognitive failure points and facilitate creation of new interventions to improve clinical decision-making processes.67
Novel health information technologies can further augment these efforts. These include effective documentation by maintaining dynamic and accurate patient histories, problem lists, and medication lists68-70; use of electronic health record–based algorithms to identify potential diagnostic delays for serious conditions71,72; use of telemedicine technologies to improve accessibility and coordination73; application of mobile health and wearable technologies to facilitate data-gathering and care delivery74,75; and use of computerized decision-support tools, including applications to interpret electrocardiograms, imaging studies, and other diagnostic tests.76
Use of precision medicine, powered by new artificial intelligence (AI) tools, is becoming more widespread. Algorithms powered by AI can augment and sometimes even outperform clinician decision-making in areas such as oncology, radiology, and primary care.77 Large biobanks such as the All of Us Research Program can be used to study thousands of environmental and genetic risk factors and health conditions simultaneously and to help identify treatments that work best for people of different backgrounds.78 Active research in these areas holds great promise in terms of how and when we diagnose diseases and make appropriate preventative and treatment decisions. Significant scientific, ethical, and regulatory challenges will need to be overcome before these technologies can address some of the most complex problems in health care.79
Finally, diagnostic performance is affected by the external environment, including the functioning of the medical liability system. Diagnostic errors that lead to patient harm are a leading cause of malpractice claims.80 Developing a legal environment, in collaboration with patient advocacy groups and health care organizations, that promotes and facilitates timely disclosure of diagnostic errors could decrease the incentive to hide errors, advance care processes, and improve outcomes.81,82
Conclusion
The burden of diagnostic errors in hospitalized patients is unacceptably high and remains an underemphasized cause of preventable morbidity and mortality. Diagnostic errors often result from a breakdown in multiple interdependent processes that involve patient-, provider-, and system-level factors. Significant challenges remain in defining and identifying diagnostic errors as well as underlying process-failure points. The most effective interventions to reduce diagnostic errors will require greater patient participation in the diagnostic process and a mix of evidence-based interventions that promote individual-provider excellence as well as system-level changes. Further research and collaboration among various stakeholders should help improve diagnostic safety for hospitalized patients.
Corresponding author: Abhishek Goyal, MD, MPH; agoyal4@bwh.harvard.edu
Disclosures: Dr. Dalal disclosed receiving income ≥ $250 from MayaMD.
1. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med. 2005;165(13):1493-1499. doi:10.1001/archinte.165.13.1493
2. National Academies of Sciences, Engineering, and Medicine. Improving Diagnosis in Health Care. The National Academies Press; 2015. doi:10.17226/21794
3. Singh H, Graber ML. Improving diagnosis in health care—the next imperative for patient safety. N Engl J Med. 2015;373(26):2493-2495. doi:10.1056/NEJMp1512241
4. Makary MA, Daniel M. Medical error—the third leading cause of death in the US. BMJ. 2016;353:i2139. doi:10.1136/bmj.i2139
5. Flanders SA, Centor B, Weber V, McGinn T, Desalvo K, Auerbach A. Challenges and opportunities in academic hospital medicine: report from the academic hospital medicine summit. J Gen Intern Med. 2009;24(5):636-641. doi:10.1007/s11606-009-0944-6
6. Griffin JA, Carr K, Bersani K, et al. Analyzing diagnostic errors in the acute setting: a process-driven approach. Diagnosis (Berl). 2021;9(1):77-88. doi:10.1515/dx-2021-0033
7. Itri JN, Tappouni RR, McEachern RO, Pesch AJ, Patel SH. Fundamentals of diagnostic error in imaging. RadioGraphics. 2018;38(6):1845-1865. doi:10.1148/rg.2018180021
8. Hammerling JA. A review of medical errors in laboratory diagnostics and where we are today. Lab Med. 2012;43(2):41-44. doi:10.1309/LM6ER9WJR1IHQAUY
9. Gunderson CG, Bilan VP, Holleck JL, et al. Prevalence of harmful diagnostic errors in hospitalised adults: a systematic review and meta-analysis. BMJ Qual Saf. 2020;29(12):1008-1018. doi:10.1136/bmjqs-2019-010822
10. Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I. N Engl J Med. 1991;324(6):370-376. doi:10.1056/NEJM199102073240604
11. Leape LL, Brennan TA, Laird N, et al. The nature of adverse events in hospitalized patients. Results of the Harvard Medical Practice Study II. N Engl J Med. 1991;324(6):377-384. doi:10.1056/NEJM199102073240605
12. Localio AR, Lawthers AG, Brennan TA, et al. Relation between malpractice claims and adverse events due to negligence. Results of the Harvard Medical Practice Study III. N Engl J Med. 1991;325(4):245-251. doi:10.1056/NEJM199107253250405
13. Wilson RM, Michel P, Olsen S, et al. Patient safety in developing countries: retrospective estimation of scale and nature of harm to patients in hospital. BMJ. 2012;344:e832. doi:10.1136/bmj.e832
14. Wilson RM, Runciman WB, Gibberd RW, Harrison BT, Newby L, Hamilton JD. The Quality in Australian Health Care Study. Med J Aust. 1995;163(9):458-471. doi:10.5694/j.1326-5377.1995.tb124691.x
15. Thomas EJ, Studdert DM, Burstin HR, et al. Incidence and types of adverse events and negligent care in Utah and Colorado. Med Care. 2000;38(3):261-271. doi:10.1097/00005650-200003000-00003
16. Baker GR, Norton PG, Flintoft V, et al. The Canadian Adverse Events Study: the incidence of adverse events among hospital patients in Canada. CMAJ. 2004;170(11):1678-1686. doi:10.1503/cmaj.1040498
17. Davis P, Lay-Yee R, Briant R, Ali W, Scott A, Schug S. Adverse events in New Zealand public hospitals II: preventability and clinical context. N Z Med J. 2003;116(1183):U624.
18. Aranaz-Andrés JM, Aibar-Remón C, Vitaller-Murillo J, et al. Incidence of adverse events related to health care in Spain: results of the Spanish National Study of Adverse Events. J Epidemiol Community Health. 2008;62(12):1022-1029. doi:10.1136/jech.2007.065227
19. Bates DW, Levine DM, Salmasian H, et al. The safety of inpatient health care. N Engl J Med. 2023;388(2):142-153. doi:10.1056/NEJMsa2206117
20. Soop M, Fryksmark U, Köster M, Haglund B. The incidence of adverse events in Swedish hospitals: a retrospective medical record review study. Int J Qual Health Care. 2009;21(4):285-291. doi:10.1093/intqhc/mzp025
21. Rafter N, Hickey A, Conroy RM, et al. The Irish National Adverse Events Study (INAES): the frequency and nature of adverse events in Irish hospitals—a retrospective record review study. BMJ Qual Saf. 2017;26(2):111-119. doi:10.1136/bmjqs-2015-004828
22. Blendon RJ, DesRoches CM, Brodie M, et al. Views of practicing physicians and the public on medical errors. N Engl J Med. 2002;347(24):1933-1940. doi:10.1056/NEJMsa022151
23. Saber Tehrani AS, Lee H, Mathews SC, et al. 25-year summary of US malpractice claims for diagnostic errors 1986-2010: an analysis from the National Practitioner Data Bank. BMJ Qual Saf. 2013;22(8):672-680. doi:10.1136/bmjqs-2012-001550
24. Malik MA, Motta-Calderon D, Piniella N, et al. A structured approach to EHR surveillance of diagnostic error in acute care: an exploratory analysis of two institutionally-defined case cohorts. Diagnosis (Berl). 2022;9(4):446-457. doi:10.1515/dx-2022-0032
25. Graber ML. The incidence of diagnostic error in medicine. BMJ Qual Saf. 2013;22(suppl 2):ii21-ii27. doi:10.1136/bmjqs-2012-001615
26. Bergl PA, Taneja A, El-Kareh R, Singh H, Nanchal RS. Frequency, risk factors, causes, and consequences of diagnostic errors in critically ill medical patients: a retrospective cohort study. Crit Care Med. 2019;47(11):e902-e910. doi:10.1097/CCM.0000000000003976
27. Hogan H, Healey F, Neale G, Thomson R, Vincent C, Black N. Preventable deaths due to problems in care in English acute hospitals: a retrospective case record review study. BMJ Qual Saf. 2012;21(9):737-745. doi:10.1136/bmjqs-2011-001159
28. Bergl PA, Nanchal RS, Singh H. Diagnostic error in the critically ill: defining the problem and exploring next steps to advance intensive care unit safety. Ann Am Thorac Soc. 2018;15(8):903-907. doi:10.1513/AnnalsATS.201801-068PS
29. Marquet K, Claes N, De Troy E, et al. One fourth of unplanned transfers to a higher level of care are associated with a highly preventable adverse event: a patient record review in six Belgian hospitals. Crit Care Med. 2015;43(5):1053-1061. doi:10.1097/CCM.0000000000000932
30. Rodwin BA, Bilan VP, Merchant NB, et al. Rate of preventable mortality in hospitalized patients: a systematic review and meta-analysis. J Gen Intern Med. 2020;35(7):2099-2106. doi:10.1007/s11606-019-05592-5
31. Winters B, Custer J, Galvagno SM, et al. Diagnostic errors in the intensive care unit: a systematic review of autopsy studies. BMJ Qual Saf. 2012;21(11):894-902. doi:10.1136/bmjqs-2012-000803
32. Raffel KE, Kantor MA, Barish P, et al. Prevalence and characterisation of diagnostic error among 7-day all-cause hospital medicine readmissions: a retrospective cohort study. BMJ Qual Saf. 2020;29(12):971-979. doi:10.1136/bmjqs-2020-010896
33. Weingart SN, Pagovich O, Sands DZ, et al. What can hospitalized patients tell us about adverse events? learning from patient-reported incidents. J Gen Intern Med. 2005;20(9):830-836. doi:10.1111/j.1525-1497.2005.0180.x
34. Schiff GD, Hasan O, Kim S, et al. Diagnostic error in medicine: analysis of 583 physician-reported errors. Arch Intern Med. 2009;169(20):1881-1887. doi:10.1001/archinternmed.2009.333
35. Singh H, Schiff GD, Graber ML, Onakpoya I, Thompson MJ. The global burden of diagnostic errors in primary care. BMJ Qual Saf. 2017;26(6):484-494. doi:10.1136/bmjqs-2016-005401
36. Schiff GD, Leape LL. Commentary: how can we make diagnosis safer? Acad Med J Assoc Am Med Coll. 2012;87(2):135-138. doi:10.1097/ACM.0b013e31823f711c
37. Schiff GD, Kim S, Abrams R, et al. Diagnosing diagnosis errors: lessons from a multi-institutional collaborative project. In: Henriksen K, Battles JB, Marks ES, Lewin DI, eds. Advances in Patient Safety: From Research to Implementation. Volume 2: Concepts and Methodology. AHRQ Publication No. 05-0021-2. Agency for Healthcare Research and Quality (US); 2005. Accessed January 16, 2023. http://www.ncbi.nlm.nih.gov/books/NBK20492/
38. Newman-Toker DE. A unified conceptual model for diagnostic errors: underdiagnosis, overdiagnosis, and misdiagnosis. Diagnosis (Berl). 2014;1(1):43-48. doi:10.1515/dx-2013-0027
39. Abimanyi-Ochom J, Bohingamu Mudiyanselage S, Catchpool M, Firipis M, Wanni Arachchige Dona S, Watts JJ. Strategies to reduce diagnostic errors: a systematic review. BMC Med Inform Decis Mak. 2019;19(1):174. doi:10.1186/s12911-019-0901-1
40. Gupta A, Harrod M, Quinn M, et al. Mind the overlap: how system problems contribute to cognitive failure and diagnostic errors. Diagnosis (Berl). 2018;5(3):151-156. doi:10.1515/dx-2018-0014
41. Saposnik G, Redelmeier D, Ruff CC, Tobler PN. Cognitive biases associated with medical decisions: a systematic review. BMC Med Inform Decis Mak. 2016;16:138. doi:10.1186/s12911-016-0377-1
42. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78(8):775-780. doi:10.1097/00001888-200308000-00003
43. Chapman EN, Kaatz A, Carnes M. Physicians and implicit bias: how doctors may unwittingly perpetuate health care disparities. J Gen Intern Med. 2013;28(11):1504-1510. doi:10.1007/s11606-013-2441-1
44. Zwaan L, Singh H. The challenges in defining and measuring diagnostic error. Diagnosis (Berl). 2015;2(2):97-103. doi:10.1515/dx-2014-0069
45. Arkes HR, Wortmann RL, Saville PD, Harkness AR. Hindsight bias among physicians weighing the likelihood of diagnoses. J Appl Psychol. 1981;66(2):252-254.
46. Singh H. Editorial: Helping health care organizations to define diagnostic errors as missed opportunities in diagnosis. Jt Comm J Qual Patient Saf. 2014;40(3):99-101. doi:10.1016/s1553-7250(14)40012-6
47. Vassar M, Holzmann M. The retrospective chart review: important methodological considerations. J Educ Eval Health Prof. 2013;10:12. doi:10.3352/jeehp.2013.10.12
48. Welch HG, Black WC. Overdiagnosis in cancer. J Natl Cancer Inst. 2010;102(9):605-613. doi:10.1093/jnci/djq099
49. Moynihan R, Doust J, Henry D. Preventing overdiagnosis: how to stop harming the healthy. BMJ. 2012;344:e3502. doi:10.1136/bmj.e3502
50. Hayward RA, Hofer TP. Estimating hospital deaths due to medical errors: preventability is in the eye of the reviewer. JAMA. 2001;286(4):415-420. doi:10.1001/jama.286.4.415
51. Singh H, Sittig DF. Advancing the science of measurement of diagnostic errors in healthcare: the Safer Dx framework. BMJ Qual Saf. 2015;24(2):103-110. doi:10.1136/bmjqs-2014-003675
52. Singh H, Khanna A, Spitzmueller C, Meyer AND. Recommendations for using the Revised Safer Dx Instrument to help measure and improve diagnostic safety. Diagnosis (Berl). 2019;6(4):315-323. doi:10.1515/dx-2019-0012
53. Classen DC, Resar R, Griffin F, et al. “Global trigger tool” shows that adverse events in hospitals may be ten times greater than previously measured. Health Aff (Millwood). 2011;30(4):581-589. doi:10.1377/hlthaff.2011.0190
54. Schiff GD. Minimizing diagnostic error: the importance of follow-up and feedback. Am J Med. 2008;121(5 suppl):S38-S42. doi:10.1016/j.amjmed.2008.02.004
55. Mitchell I, Schuster A, Smith K, Pronovost P, Wu A. Patient safety incident reporting: a qualitative study of thoughts and perceptions of experts 15 years after “To Err is Human.” BMJ Qual Saf. 2016;25(2):92-99. doi:10.1136/bmjqs-2015-004405
56. Mazurenko O, Collum T, Ferdinand A, Menachemi N. Predictors of hospital patient satisfaction as measured by HCAHPS: a systematic review. J Healthc Manag. 2017;62(4):272-283. doi:10.1097/JHM-D-15-00050
57. Liberman AL, Newman-Toker DE. Symptom-Disease Pair Analysis of Diagnostic Error (SPADE): a conceptual framework and methodological approach for unearthing misdiagnosis-related harms using big data. BMJ Qual Saf. 2018;27(7):557-566. doi:10.1136/bmjqs-2017-007032
58. Utility of Predictive Systems to Identify Inpatient Diagnostic Errors: the UPSIDE study. NIH RePORT/RePORTER. Accessed January 14, 2023. https://reporter.nih.gov/search/rpoHXlEAcEudQV3B9ld8iw/project-details/10020962
59. Overview of Patient Safety Learning Laboratory (PSLL) Projects. Agency for Healthcare Research and Quality. Accessed January 14, 2023. https://www.ahrq.gov/patient-safety/resources/learning-lab/index.html
60. Achieving Diagnostic Excellence through Prevention and Teamwork (ADEPT). NIH RePORT/RePORTER. Accessed January 14, 2023. https://reporter.nih.gov/project-details/10642576
61. Zwaan L, Singh H. Diagnostic error in hospitals: finding forests not just the big trees. BMJ Qual Saf. 2020;29(12):961-964. doi:10.1136/bmjqs-2020-011099
62. Longtin Y, Sax H, Leape LL, Sheridan SE, Donaldson L, Pittet D. Patient participation: current knowledge and applicability to patient safety. Mayo Clin Proc. 2010;85(1):53-62. doi:10.4065/mcp.2009.0248
63. Murphy DR, Singh H, Berlin L. Communication breakdowns and diagnostic errors: a radiology perspective. Diagnosis (Berl). 2014;1(4):253-261. doi:10.1515/dx-2014-0035
64. Singh H, Naik AD, Rao R, Petersen LA. Reducing diagnostic errors through effective communication: harnessing the power of information technology. J Gen Intern Med. 2008;23(4):489-494. doi:10.1007/s11606-007-0393-z
65. Singh H, Connor DM, Dhaliwal G. Five strategies for clinicians to advance diagnostic excellence. BMJ. 2022;376:e068044. doi:10.1136/bmj-2021-068044
66. Yale S, Cohen S, Bordini BJ. Diagnostic time-outs to improve diagnosis. Crit Care Clin. 2022;38(2):185-194. doi:10.1016/j.ccc.2021.11.008
67. Schwartz A, Peskin S, Spiro A, Weiner SJ. Impact of unannounced standardized patient audit and feedback on care, documentation, and costs: an experiment and claims analysis. J Gen Intern Med. 2021;36(1):27-34. doi:10.1007/s11606-020-05965-1
68. Carpenter JD, Gorman PN. Using medication list—problem list mismatches as markers of potential error. Proc AMIA Symp. 2002:106-110.
69. Hron JD, Manzi S, Dionne R, et al. Electronic medication reconciliation and medication errors. Int J Qual Health Care. 2015;27(4):314-319. doi:10.1093/intqhc/mzv046
70. Graber ML, Siegal D, Riah H, Johnston D, Kenyon K. Electronic health record–related events in medical malpractice claims. J Patient Saf. 2019;15(2):77-85. doi:10.1097/PTS.0000000000000240
71. Murphy DR, Wu L, Thomas EJ, Forjuoh SN, Meyer AND, Singh H. Electronic trigger-based intervention to reduce delays in diagnostic evaluation for cancer: a cluster randomized controlled trial. J Clin Oncol. 2015;33(31):3560-3567. doi:10.1200/JCO.2015.61.1301
72. Singh H, Giardina TD, Forjuoh SN, et al. Electronic health record-based surveillance of diagnostic errors in primary care. BMJ Qual Saf. 2012;21(2):93-100. doi:10.1136/bmjqs-2011-000304
73. Armaignac DL, Saxena A, Rubens M, et al. Impact of telemedicine on mortality, length of stay, and cost among patients in progressive care units: experience from a large healthcare system. Crit Care Med. 2018;46(5):728-735. doi:10.1097/CCM.0000000000002994
74. MacKinnon GE, Brittain EL. Mobile health technologies in cardiopulmonary disease. Chest. 2020;157(3):654-664. doi:10.1016/j.chest.2019.10.015
75. DeVore AD, Wosik J, Hernandez AF. The future of wearables in heart failure patients. JACC Heart Fail. 2019;7(11):922-932. doi:10.1016/j.jchf.2019.08.008
76. Tsai TL, Fridsma DB, Gatti G. Computer decision support as a source of interpretation error: the case of electrocardiograms. J Am Med Inform Assoc. 2003;10(5):478-483. doi:10.1197/jamia.M1279
77. Lin SY, Mahoney MR, Sinsky CA. Ten ways artificial intelligence will transform primary care. J Gen Intern Med. 2019;34(8):1626-1630. doi:10.1007/s11606-019-05035-1
78. Ramirez AH, Gebo KA, Harris PA. Progress with the All Of Us research program: opening access for researchers. JAMA. 2021;325(24):2441-2442. doi:10.1001/jama.2021.7702
79. Johnson KB, Wei W, Weeraratne D, et al. Precision medicine, AI, and the future of personalized health care. Clin Transl Sci. 2021;14(1):86-93. doi:10.1111/cts.12884
80. Gupta A, Snyder A, Kachalia A, Flanders S, Saint S, Chopra V. Malpractice claims related to diagnostic errors in the hospital. BMJ Qual Saf. 2018;27(1):53-60. doi:10.1136/bmjqs-2017-006774
81. Renkema E, Broekhuis M, Ahaus K. Conditions that influence the impact of malpractice litigation risk on physicians’ behavior regarding patient safety. BMC Health Serv Res. 2014;14(1):38. doi:10.1186/1472-6963-14-38
82. Kachalia A, Mello MM, Nallamothu BK, Studdert DM. Legal and policy interventions to improve patient safety. Circulation. 2016;133(7):661-671. doi:10.1161/CIRCULATIONAHA.115.015880
Safety in Health Care: An Essential Pillar of Quality
Each year, an estimated 44,000 to 98,000 deaths occur due to medical errors.1 The Harvard Medical Practice Study (HMPS), published in 1991, found that 3.7% of hospitalized patients were harmed by adverse events and 1% were harmed by adverse events due to negligence.2 The latest HMPS showed that, despite significant improvements in patient safety over the past 3 decades, patient safety challenges persist. This study found that inpatient care leads to harm in nearly a quarter of patients and that 1 in 4 of these adverse events is preventable.3
Since the first HMPS was published, efforts to improve patient safety have focused on identifying the causes of medical error and on designing and implementing interventions to mitigate errors. Factors contributing to medical errors have been well documented: the complexity of care delivery from inpatient to outpatient settings, with transitions of care and extensive use of medications; multiple comorbidities; and the fragmentation of care across multiple systems and specialties. Although most errors are related to process or system failures, the accountability of each practitioner and clinician is essential to promoting a culture of safety. Many medical errors are preventable through multifaceted approaches employed throughout all phases of care,4 with medication errors, in both prescribing and administration, and diagnostic and treatment errors encompassing most areas of risk prevention. Broadly, safety efforts should emphasize building a culture of safety in which all safety events, including near misses, are reported.
Two articles in this issue of JCOM address key elements of patient safety: building a safety culture and diagnostic error. Merchant et al5 report on an initiative designed to promote a safety culture by recognizing and rewarding staff who identify and report near misses. The tiered awards program they designed significantly increased staff participation in the safety awards nomination process and was associated with increased reporting of actual and close-call events and greater attendance at monthly safety forums. Goyal et al,6 noting that diagnostic error rates in hospitalized patients remain unacceptably high, provide a concise update on diagnostic error among inpatients, focusing on issues related to defining and measuring diagnostic errors and on current strategies to improve diagnostic safety in hospitalized patients. In a third article, Sathi et al7 report on efforts to teach quality improvement (QI) methods to internal medicine trainees; their project increased residents’ knowledge of their patient panels and comfort with QI approaches and led to improved patient outcomes.
Major progress has been made to improve health care safety since the first HMPS was published. However, the latest HMPS shows that patient safety efforts must continue, given the persistent risk for patient harm in the current health care delivery system. Safety, along with clear accountability for identifying, reporting, and addressing errors, should be a top priority for health care systems throughout the preventive, diagnostic, and therapeutic phases of care.
Corresponding author: Ebrahim Barkoudah, MD, MPH; ebarkoudah@bwh.harvard.edu
1. Clancy C, Munier W, Brady J. National healthcare quality report. Agency for Healthcare Research and Quality; 2013.
2. Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I. N Engl J Med. 1991;324(6):370-376. doi:10.1056/NEJM199102073240604
3. Bates DW, Levine DM, Salmasian H, et al. The safety of inpatient health care. N Engl J Med. 2023;388(2):142-153. doi:10.1056/NEJMsa2206117
4. Bates DW, Cullen DJ, Laird N, et al. Incidence of adverse drug events and potential adverse drug events: implications for prevention. JAMA. 1995;274(1):29-34.
5. Merchant NB, O’Neal J, Murray JS. Development of a safety awards program at a Veterans Affairs health care system: a quality improvement initiative. J Clin Outcome Manag. 2023;30(1):9-16. doi:10.12788/jcom.0120
6. Goyal A, Martin-Doyle W, Dalal AK. Diagnostic errors in hospitalized patients. J Clin Outcome Manag. 2023;30(1):17-27. doi:10.12788/jcom.0121
7. Sathi K, Huang KTL, Chandler DM, et al. Teaching quality improvement to internal medicine residents to address patient care gaps in ambulatory quality metrics. J Clin Outcome Manag. 2023;30(1):1-6. doi:10.12788/jcom.0119
Anesthetic Choices and Postoperative Delirium Incidence: Propofol vs Sevoflurane
Study 1 Overview (Chang et al)
Objective: To assess the incidence of postoperative delirium (POD) following propofol- vs sevoflurane-based anesthesia in geriatric spine surgery patients.
Design: Retrospective, single-blinded observational study of propofol- and sevoflurane-based anesthesia cohorts.
Setting and participants: Patients eligible for this study were aged 65 years or older and admitted to the SMG-SNU Boramae Medical Center (Seoul, South Korea). All patients underwent general anesthesia with either intravenous propofol or inhalational sevoflurane for spine surgery between January 2015 and December 2019. Patients were retrospectively identified via electronic medical records. Exclusion criteria included preoperative delirium, history of dementia, psychiatric disease, alcoholism, hepatic or renal dysfunction, postoperative mechanical ventilation dependence, other surgery within the previous 6 months, maintenance of intraoperative anesthesia with combined anesthetics, or incomplete medical records.
Main outcome measures: The primary outcome was the incidence of POD after administration of propofol- and sevoflurane-based anesthesia during hospitalization. Patients were screened for POD regularly by attending nurses using the Nursing Delirium Screening Scale (disorientation, inappropriate behavior, inappropriate communication, hallucination, and psychomotor retardation) during the entirety of the patient’s hospital stay; if 1 or more screening criteria were met, a psychiatrist was consulted for the proper diagnosis and management of delirium. A psychiatric diagnosis was required for a case to be counted toward the incidence of POD in this study. Secondary outcomes included postoperative 30-day complications (angina, myocardial infarction, transient ischemic attack/stroke, pneumonia, deep vein thrombosis, pulmonary embolism, acute kidney injury, or infection) and length of postoperative hospital stay.
Main results: POD occurred in 29 patients (10.3%) out of the total cohort of 281. POD was more common in the sevoflurane group than in the propofol group (15.7% vs 5.0%; P = .003). In multivariable logistic regression, inhalational sevoflurane was associated with an increased risk of POD compared with propofol-based anesthesia (odds ratio [OR], 4.120; 95% CI, 1.549-10.954; P = .005). There was no association between choice of anesthetic and postoperative 30-day complications or the length of postoperative hospital stay. Both older age (OR, 1.242; 95% CI, 1.130-1.366; P < .001) and higher pain score at postoperative day 1 (OR, 1.338; 95% CI, 1.056-1.696; P = .016) were associated with increased risk of POD.
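As a rough consistency check (an illustration added here, not an analysis reported by Chang et al), the unadjusted odds ratio implied by the raw incidence figures can be computed directly from the two proportions:

$$\mathrm{OR}_{\text{unadjusted}} = \frac{0.157/(1-0.157)}{0.050/(1-0.050)} \approx \frac{0.186}{0.053} \approx 3.5$$

The adjusted estimate of 4.12 is of the same order, with the difference reflecting covariate adjustment in the multivariable model.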
Conclusion: Propofol-based anesthesia was associated with a lower incidence of and risk for POD than sevoflurane-based anesthesia in older patients undergoing spine surgery.
Study 2 Overview (Mei et al)
Objective: To determine the incidence and duration of POD in older patients after total knee/hip replacement (TKR/THR) under intravenous propofol or inhalational sevoflurane general anesthesia.
Design: Randomized clinical trial of propofol and sevoflurane groups.
Setting and participants: This study was conducted at the Shanghai Tenth People’s Hospital and involved 209 participants enrolled between June 2016 and November 2019. All participants were 60 years of age or older, scheduled for TKR/THR surgery under general anesthesia, American Society of Anesthesiologists (ASA) class I to III, and assessed to be of normal cognitive function preoperatively via a Mini-Mental State Examination. Participant exclusion criteria included preexisting delirium as assessed by the Confusion Assessment Method (CAM), prior diagnosed neurological diseases (eg, Parkinson’s disease), prior diagnosed mental disorders (eg, schizophrenia), or impaired vision or hearing that would influence cognitive assessments. All participants were randomly assigned to either sevoflurane or propofol anesthesia for their surgery via a computer-generated list. Of these, 103 received inhalational sevoflurane and 106 received intravenous propofol. All participants received standardized postoperative care.
Main outcome measures: All participants were interviewed by investigators, who were blinded to the anesthesia regimen, twice daily on postoperative days 1, 2, and 3 using the CAM and a CAM-based scoring system (CAM-S) to assess delirium severity. The CAM encapsulated 4 criteria: acute onset and fluctuating course, inattention, disorganized thinking, and altered level of consciousness. To diagnose delirium, both the first and second criteria must be met, in addition to either the third or fourth criterion. The averages of the scores across the 3 postoperative days indicated delirium severity, while the incidence and duration of delirium were assessed by the presence of delirium as determined by the CAM on any postoperative day.
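Expressed formally (a restatement of the diagnostic rule just described, with F1 through F4 denoting the 4 CAM features in the order listed):

$$\text{Delirium} \iff F_1 \wedge F_2 \wedge (F_3 \vee F_4)$$

That is, a participant was classified as delirious only when acute onset/fluctuating course and inattention were both present, together with at least one of disorganized thinking or altered level of consciousness.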
Main results: All eligible participants (N = 209; mean [SD] age, 71.2 [6.7] years; 29.2% male) were included in the final analysis. The incidence of POD was not statistically different between the propofol and sevoflurane groups (33.0% vs 23.3%; P = .119, chi-square test). It was estimated that 316 participants in each arm would have been needed to detect a statistically significant difference. The number of days of POD per person was higher with propofol anesthesia than with sevoflurane (0.5 [0.8] vs 0.3 [0.5]; P = .049, Student’s t-test).
Conclusion: This underpowered study showed a 9.7-percentage-point difference in the incidence of POD between older adults who received propofol (33.0%) and those who received sevoflurane (23.3%) after THR/TKR. Further studies with larger sample sizes are needed to compare general anesthetics and their role in POD.
Commentary
Delirium is characterized by an acute state of confusion with fluctuating mental status, inattention, disorganized thinking, and altered level of consciousness. It is often caused by medications and/or their related adverse effects, infections, electrolyte imbalances, and other clinical etiologies. Delirium often manifests in post-surgical settings, disproportionately affecting older patients and leading to increased risk of morbidity, mortality, hospital length of stay, and health care costs.1 Intraoperative risk factors for POD are determined by the degree of operative stress (eg, lower-risk surgeries put the patient at reduced risk for POD as compared to higher-risk surgeries) and are additive to preexisting patient-specific risk factors, such as older age and functional impairment.1 Because operative stress is associated with risk for POD, limiting operative stress in controlled ways, such as through the choice of anesthetic agent administered, may be a pragmatic way to manage operative risks and optimize outcomes, especially when serving a surgically vulnerable population.
In Study 1, Chang et al sought to assess whether 2 commonly utilized general anesthetics, propofol and sevoflurane, differentially affected the incidence of POD in older patients undergoing spine surgery. In this retrospective, single-blinded observational study of 281 geriatric patients, the researchers found that sevoflurane was associated with a higher risk of POD than propofol. However, these anesthetics were not associated with surgical outcomes such as postoperative 30-day complications or the length of postoperative hospital stay. While these findings added new knowledge to this field of research, several limitations should be kept in mind when interpreting the study’s results. For instance, the sample size was relatively small, with all cases drawn from a single center and analyzed retrospectively. In addition, although a standardized nursing screening tool was used for delirium detection, hypoactive or less symptomatic delirium may have been missed, which in turn would lead to an underestimation of POD incidence. The latter is a common limitation in delirium research.
In Study 2, Mei et al similarly explored the effects of general anesthetics on POD in older surgical patients. Specifically, using a randomized clinical trial design, the investigators compared propofol with sevoflurane in older patients who underwent TKR/THR, and their roles in POD severity and duration. Although the incidence of POD was higher in those who received propofol compared to sevoflurane, this trial was underpowered and the results did not reach statistical significance. In addition, while the duration of POD was slightly longer in the propofol group compared to the sevoflurane group (0.5 vs 0.3 days), it was unclear if this finding was clinically significant. Similar to many research studies in POD, limitations of Study 2 included a small sample size of 209 patients, with all participants enrolled from a single center. On the other hand, this study illustrated the feasibility of a method that allowed reproducible prospective assessment of POD time course using CAM and CAM-S.
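For readers interested in the power estimate cited for Study 2, the standard two-proportion sample-size formula, evaluated here with the observed rates purely as an illustration (the trial’s actual design assumptions are not reported in this summary), lands in the same range as the 316 patients per arm noted by the investigators:

$$n \approx \frac{(z_{1-\alpha/2}+z_{1-\beta})^{2}\left[p_1(1-p_1)+p_2(1-p_2)\right]}{(p_1-p_2)^{2}} = \frac{(1.96+0.84)^{2}\,(0.33\cdot0.67+0.233\cdot0.767)}{(0.33-0.233)^{2}} \approx 334$$

per group for 80% power at a 2-sided α of .05; the exact figure depends on the assumed proportions and whether a continuity correction is applied.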
Applications for Clinical Practice and System Implementation
The delineation of risk factors that contribute to delirium after surgery in older patients is key to mitigating risks for POD and improving clinical outcomes. An important step towards a better understanding of these modifiable risk factors is to clearly quantify intraoperative risk of POD attributable to specific anesthetics. While preclinical studies have shown differential neurotoxic effects of propofol and sevoflurane, their impact on clinically important neurologic outcomes such as delirium and cognitive decline remains poorly understood. Although Studies 1 and 2 both provided head-to-head comparisons of propofol and sevoflurane as risk factors for POD in high-operative-stress surgeries in older patients, the results were inconsistent. That being said, this small incremental increase in knowledge was not unexpected in the course of discovery around a clinically complex research question. Importantly, these studies provided evidence regarding the methodological approaches that could be taken to further this line of research.
The factors mediating the differences in neurologic outcomes between anesthetic agents are likely pharmacological, biological, and methodological. Pharmacologically, differences between target receptors, such as GABA-A (propofol, etomidate) or NMDA (ketamine), could be a defining feature underlying differences in the incidence of POD. Secondary actions of anesthetic agents on glycine, nicotinic, and acetylcholine receptors could also play a role. Biologically, genes such as CYP2E1, CYP2B6, CYP2C9, GSTP1, UGT1A9, SULT1A1, and NQO1 have all been identified as genetic factors in the metabolism of anesthetics, and variations in such genes could result in different responses to anesthetics.2 Methodologically, routes of anesthetic administration (eg, inhalation vs intravenous), preexisting anatomical structures, or confounding medical conditions (eg, lower respiratory volume due to older age) may influence POD incidence, duration, or severity. Moreover, methodological differences between Studies 1 and 2, such as the surgeries performed (spinal vs TKR/THR), the patient populations (South Korean vs Chinese), and the diagnosis and monitoring of delirium (retrospective screening and diagnosis vs prospective CAM/CAM-S), may impact delirium outcomes. Thus, these factors should be considered in the design of future clinical trials undertaken to investigate the effects of anesthetics on POD.
Given the high prevalence of delirium and its associated adverse outcomes in the immediate postoperative period in older patients, further research is warranted to determine how anesthetics affect POD in order to optimize perioperative care and mitigate risks in this vulnerable population. Moreover, parallel investigations into how anesthetics differentially impact the development of transient or longer-term cognitive impairment after a surgical procedure (ie, postoperative cognitive dysfunction) in older adults are urgently needed in order to improve their cognitive health.
Practice Points
- Intravenous propofol and inhalational sevoflurane may be differentially associated with incidence, duration, and severity of POD in geriatric surgical patients.
- Further larger-scale studies are warranted to clarify the role of anesthetic choice in POD in order to optimize surgical outcomes in older patients.
–Jared Doan, BS, and Fred Ko, MD
Icahn School of Medicine at Mount Sinai
1. Dasgupta M, Dumbrell AC. Preoperative risk assessment for delirium after noncardiac surgery: a systematic review. J Am Geriatr Soc. 2006;54(10):1578-1589. doi:10.1111/j.1532-5415.2006.00893.x
2. Mikstacki A, Skrzypczak-Zielinska M, Tamowicz B, et al. The impact of genetic factors on response to anaesthetics. Adv Med Sci. 2013;58(1):9-14. doi:10.2478/v10039-012-0065-z
Study 1 Overview (Chang et al)
Objective: To assess the incidence of postoperative delirium (POD) following propofol- vs sevoflurane-based anesthesia in geriatric spine surgery patients.
Design: Retrospective, single-blinded observational study of propofol- and sevoflurane-based anesthesia cohorts.
Setting and participants: Patients eligible for this study were aged 65 years or older admitted to the SMG-SNU Boramae Medical Center (Seoul, South Korea). All patients underwent general anesthesia either via intravenous propofol or inhalational sevoflurane for spine surgery between January 2015 and December 2019. Patients were retrospectively identified via electronic medical records. Patient exclusion criteria included preoperative delirium, history of dementia, psychiatric disease, alcoholism, hepatic or renal dysfunction, postoperative mechanical ventilation dependence, other surgery within the recent 6 months, maintenance of intraoperative anesthesia with combined anesthetics, or incomplete medical record.
Main outcome measures: The primary outcome was the incidence of POD after administration of propofol- and sevoflurane-based anesthesia during hospitalization. Patients were screened for POD regularly by attending nurses using the Nursing Delirium Screening Scale (disorientation, inappropriate behavior, inappropriate communication, hallucination, and psychomotor retardation) during the entirety of the patient’s hospital stay; if 1 or more screening criteria were met, a psychiatrist was consulted for the proper diagnosis and management of delirium. A psychiatric diagnosis was required for a case to be counted toward the incidence of POD in this study. Secondary outcomes included postoperative 30-day complications (angina, myocardial infarction, transient ischemic attack/stroke, pneumonia, deep vein thrombosis, pulmonary embolism, acute kidney injury, or infection) and length of postoperative hospital stay.
Main results: POD occurred in 29 patients (10.3%) out of the total cohort of 281. POD was more common in the sevoflurane group than in the propofol group (15.7% vs 5.0%; P = .003). Using multivariable logistic regression, inhalational sevoflurane was associated with an increased risk of POD as compared to propofol-based anesthesia (odds ratio [OR], 4.120; 95% CI, 1.549-10.954; P = .005). There was no association between choice of anesthetic and postoperative 30-day complications or the length of postoperative hospital stay. Both older age (OR, 1.242; 95% CI, 1.130-1.366; P < .001) and higher pain score at postoperative day 1 (OR, 1.338; 95% CI, 1.056-1.696; P = .016) were associated with increased risk of POD.
Conclusion: Propofol-based anesthesia was associated with a lower incidence of and risk for POD than sevoflurane-based anesthesia in older patients undergoing spine surgery.
Study 2 Overview (Mei et al)
Objective: To determine the incidence and duration of POD in older patients after total knee/hip replacement (TKR/THR) under intravenous propofol or inhalational sevoflurane general anesthesia.
Design: Randomized clinical trial of propofol and sevoflurane groups.
Setting and participants: This study was conducted at the Shanghai Tenth People’s Hospital and involved 209 participants enrolled between June 2016 and November 2019. All participants were 60 years of age or older, scheduled for TKR/THR surgery under general anesthesia, American Society of Anesthesiologists (ASA) class I to III, and assessed to be of normal cognitive function preoperatively via a Mini-Mental State Examination. Participant exclusion criteria included preexisting delirium as assessed by the Confusion Assessment Method (CAM), prior diagnosed neurological diseases (eg, Parkinson’s disease), prior diagnosed mental disorders (eg, schizophrenia), or impaired vision or hearing that would influence cognitive assessments. All participants were randomly assigned to either sevoflurane or propofol anesthesia for their surgery via a computer-generated list. Of these, 103 received inhalational sevoflurane and 106 received intravenous propofol. All participants received standardized postoperative care.
Main outcome measures: All participants were interviewed by investigators, who were blinded to the anesthesia regimen, twice daily on postoperative days 1, 2, and 3 using the CAM and a CAM-based scoring system (CAM-S) to assess delirium severity. The CAM assesses 4 features: acute onset and fluctuating course, inattention, disorganized thinking, and altered level of consciousness. To diagnose delirium, both the first and second features must be present, in addition to either the third or fourth. The average of the scores across the 3 postoperative days indicated delirium severity, while the incidence and duration of delirium were determined by the presence of delirium on CAM assessment on any postoperative day.
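The CAM algorithm described above reduces to a simple Boolean rule; a minimal sketch of how that rule might be encoded is shown below (feature names are illustrative and not taken from the investigators' study materials).

```python
def cam_delirium(acute_onset_fluctuating: bool,
                 inattention: bool,
                 disorganized_thinking: bool,
                 altered_consciousness: bool) -> bool:
    """CAM rule: features 1 and 2 are both required, plus either feature 3 or 4."""
    return (acute_onset_fluctuating and inattention) and (
        disorganized_thinking or altered_consciousness
    )

# Example: fluctuating acute confusion with inattention and altered consciousness.
print(cam_delirium(True, True, False, True))   # True -> meets CAM criteria for delirium
```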
Main results: All eligible participants (N = 209; mean [SD] age, 71.2 [6.7] years; 29.2% male) were included in the final analysis. The incidence of POD did not differ significantly between the propofol and sevoflurane groups (33.0% vs 23.3%; P = .119, chi-square test). It was estimated that 316 participants per arm would have been needed to detect a statistically significant difference. The number of days of POD per person was higher with propofol anesthesia than with sevoflurane (0.5 [0.8] vs 0.3 [0.5]; P = .049, Student’s t-test).
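As a consistency check on the reported incidence comparison, the sketch below reconstructs approximate 2 x 2 counts from the published percentages and group sizes (propofol, 106; sevoflurane, 103) and applies a chi-square test without continuity correction. The counts are back-calculated estimates, not the study dataset.

```python
from scipy.stats import chi2_contingency

# Approximate counts back-calculated from the reported rates:
# propofol, 33.0% of 106 -> ~35 with POD; sevoflurane, 23.3% of 103 -> ~24 with POD.
table = [[35, 106 - 35],   # propofol: POD, no POD
         [24, 103 - 24]]   # sevoflurane: POD, no POD

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.2f}, P = {p:.3f}")   # P ~ .12, consistent with the reported P = .119
```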
Conclusion: This underpowered study showed a 9.7% difference in the incidence of POD between older adults who received propofol (33.0%) and sevoflurane (23.3%) after THR/TKR. Further studies with a larger sample size are needed to compare general anesthetics and their role in POD.
Commentary
Delirium is characterized by an acute state of confusion with fluctuating mental status, inattention, disorganized thinking, and altered level of consciousness. It is often caused by medications and/or their related adverse effects, infections, electrolyte imbalances, and other clinical etiologies. Delirium often manifests in post-surgical settings, disproportionately affecting older patients and leading to increased risk of morbidity, mortality, hospital length of stay, and health care costs.1 Intraoperative risk factors for POD are determined by the degree of operative stress (eg, lower-risk surgeries put the patient at reduced risk for POD as compared to higher-risk surgeries) and are additive to preexisting patient-specific risk factors, such as older age and functional impairment.1 Because operative stress is associated with risk for POD, limiting operative stress in controlled ways, such as through the choice of anesthetic agent administered, may be a pragmatic way to manage operative risks and optimize outcomes, especially when serving a surgically vulnerable population.
In Study 1, Chang et al sought to assess whether 2 commonly utilized general anesthetics, propofol and sevoflurane, in older patients undergoing spine surgery differentially affected the incidence of POD. In this retrospective, single-blinded observational study of 281 geriatric patients, the researchers found that sevoflurane was associated with a higher risk of POD as compared to propofol. However, these anesthetics were not associated with surgical outcomes such as postoperative 30-day complications or the length of postoperative hospital stay. While these findings added new knowledge to this field of research, several limitations should be kept in mind when interpreting this study’s results. For instance, the sample size was relatively small, with all cases selected from a single center utilizing a retrospective analysis. In addition, although a standardized nursing screening tool was used as a method for delirium detection, hypoactive delirium or less symptomatic delirium may have been missed, which in turn would lead to an underestimation of POD incidence. The latter is a common limitation in delirium research.
In Study 2, Mei et al similarly explored the effects of general anesthetics on POD in older surgical patients. Specifically, using a randomized clinical trial design, the investigators compared propofol with sevoflurane in older patients who underwent TKR/THR, and their roles in POD severity and duration. Although the incidence of POD was higher in those who received propofol compared to sevoflurane, this trial was underpowered and the results did not reach statistical significance. In addition, while the duration of POD was slightly longer in the propofol group compared to the sevoflurane group (0.5 vs 0.3 days), it was unclear if this finding was clinically significant. Similar to many research studies in POD, limitations of Study 2 included a small sample size of 209 patients, with all participants enrolled from a single center. On the other hand, this study illustrated the feasibility of a method that allowed reproducible prospective assessment of POD time course using CAM and CAM-S.
Applications for Clinical Practice and System Implementation
The delineation of risk factors that contribute to delirium after surgery in older patients is key to mitigating risks for POD and improving clinical outcomes. An important step towards a better understanding of these modifiable risk factors is to clearly quantify intraoperative risk of POD attributable to specific anesthetics. While preclinical studies have shown differential neurotoxicity effects of propofol and sevoflurane, their impact on clinically important neurologic outcomes such as delirium and cognitive decline remains poorly understood. Although Studies 1 and 2 both provided head-to-head comparisons of propofol and sevoflurane as risk factors for POD in high-operative-stress surgeries in older patients, the results were inconsistent. That being said, this small incremental increase in knowledge was not unexpected in the course of discovery around a clinically complex research question. Importantly, these studies provided evidence regarding the methodological approaches that could be taken to further this line of research.
The factors mediating differences in neurologic outcomes between anesthetic agents are likely pharmacological, biological, and methodological. Pharmacologically, differences in target receptors, such as GABA-A (propofol, etomidate) or NMDA (ketamine), could be a defining contributor to differences in the incidence of POD. Additionally, secondary actions of anesthetic agents on glycine, nicotinic, and acetylcholine receptors could play a role as well. Biologically, genes such as CYP2E1, CYP2B6, CYP2C9, GSTP1, UGT1A9, SULT1A1, and NQO1 have been identified as genetic factors in the metabolism of anesthetics, and variations in these genes could result in different responses to anesthetics.2 Methodologically, routes of anesthetic administration (eg, inhalational vs intravenous), preexisting anatomical structures, or confounding medical conditions (eg, lower respiratory volume due to older age) may influence POD incidence, duration, or severity. Moreover, methodological differences between Studies 1 and 2, such as the surgeries performed (spine vs TKR/THR), patient populations (South Korean vs Chinese), and the diagnosis and monitoring of delirium (retrospective screening and diagnosis vs prospective CAM/CAM-S), may affect delirium outcomes. Thus, these factors should be considered in the design of future clinical trials investigating the effects of anesthetics on POD.
Given the high prevalence of delirium and its associated adverse outcomes in the immediate postoperative period in older patients, further research is warranted to determine how anesthetics affect POD in order to optimize perioperative care and mitigate risks in this vulnerable population. Moreover, parallel investigations into how anesthetics differentially impact the development of transient or longer-term cognitive impairment after a surgical procedure (ie, postoperative cognitive dysfunction) in older adults are urgently needed in order to improve their cognitive health.
Practice Points
- Intravenous propofol and inhalational sevoflurane may be differentially associated with incidence, duration, and severity of POD in geriatric surgical patients.
- Further larger-scale studies are warranted to clarify the role of anesthetic choice in POD in order to optimize surgical outcomes in older patients.
–Jared Doan, BS, and Fred Ko, MD
Icahn School of Medicine at Mount Sinai
1. Dasgupta M, Dumbrell AC. Preoperative risk assessment for delirium after noncardiac surgery: a systematic review. J Am Geriatr Soc. 2006;54(10):1578-1589. doi:10.1111/j.1532-5415.2006.00893.x
2. Mikstacki A, Skrzypczak-Zielinska M, Tamowicz B, et al. The impact of genetic factors on response to anaesthetics. Adv Med Sci. 2013;58(1):9-14. doi:10.2478/v10039-012-0065-z
Improving Inpatient COVID-19 Vaccination Rates Among Adult Patients at a Tertiary Academic Medical Center
From the Department of Medicine, The George Washington University School of Medicine and Health Sciences, Washington, DC.
Abstract
Objective: Inpatient vaccination initiatives are well described in the literature. During the COVID-19 pandemic, hospitals began administering COVID-19 vaccines to hospitalized patients. Although vaccination rates increased, there remained many unvaccinated patients despite community efforts. This quality improvement project aimed to increase the COVID-19 vaccination rates of hospitalized patients on the medicine service at the George Washington University Hospital (GWUH).
Methods: From November 2021 through February 2022, we conducted a Plan-Do-Study-Act (PDSA) cycle with 3 phases. Initial steps included gathering baseline data from the electronic health record and consulting stakeholders. The first 2 phases focused on educating housestaff on the availability, ordering process, and administration of the Pfizer vaccine. The third phase consisted of developing educational pamphlets for patients to be included in their admission packets.
Results: The baseline mean COVID-19 vaccination rate (August to October 2021) of eligible patients on the medicine service was 10.7%. In the months after we implemented the PDSA cycle (November 2021 to February 2022), the mean vaccination rate increased to 15.4%.
Conclusion: This quality improvement project implemented measures to increase administration of the Pfizer vaccine to eligible patients admitted to the medicine service at GWUH. The mean vaccination rate increased from 10.7% in the 3 months prior to implementation to 15.4% during the 4 months post implementation. Other measures to consider in the future include increasing the availability of other COVID-19 vaccines at our hospital and incorporating the vaccine into the admission order set to help facilitate vaccination early in the hospital course.
Keywords: housestaff, quality improvement, PDSA, COVID-19, BNT162b2 vaccine, patient education
Throughout the COVID-19 pandemic, case rates in the United States have fluctuated considerably, corresponding to epidemic waves. In 2021, US daily cases of COVID-19 peaked at nearly 300,000 in early January and reached a nadir of 8000 cases in mid-June.1 In September 2021, new cases had increased to 200,000 per day due to the prevalence of the Delta variant.1 Particularly with the emergence of new variants of SARS-CoV-2, vaccination efforts to limit the spread of infection and severity of illness are critical. Data have shown that 2 doses of the BNT162b2 vaccine (Pfizer-BioNTech) were largely protective against severe infection for approximately 6 months.2,3 When we began this quality improvement (QI) project in September 2021, only 179 million Americans, just over half of the US population, had been fully vaccinated, according to data from the Centers for Disease Control and Prevention.4 An electronic survey conducted in the United States with more than 5 million responses found that, of those who were hesitant about receiving the vaccine, 49% reported a fear of adverse effects and 48% reported a lack of trust in the vaccine.5
This QI project sought to target unvaccinated individuals admitted to the internal medicine inpatient service. Vaccinating hospitalized patients is especially important since they are sicker than the general population and at higher risk of having poor outcomes from COVID-19. Inpatient vaccine initiatives, such as administering influenza vaccine prior to discharge, have been successfully implemented in the past.6 One large COVID-19 vaccination program featured an admission order set to increase the rates of vaccination among hospitalized patients.7 Our QI project piloted a multidisciplinary approach involving the nursing staff, pharmacy, information technology (IT) department, and internal medicine housestaff to increase COVID-19 vaccination rates among hospitalized patients on the medical service. This project aimed to increase inpatient vaccination rates through interventions targeting both primary providers as well as the patients themselves.
Methods
Setting and Interventions
This project was conducted at the George Washington University Hospital (GWUH) in Washington, DC. The clinicians involved in the study were the internal medicine housestaff, and the patients included were adults admitted to the resident medicine ward teams. The project was deemed exempt by the institutional review board and did not require informed consent.
The quality improvement initiative had 3 phases, each featuring a different intervention (Table 1). The first phase involved sending a weekly announcement (via email and a secure health care messaging app) to current residents rotating on the inpatient medicine service. The announcement contained information regarding COVID-19 vaccine availability at the hospital, instructions on ordering the vaccine, and the process of coordinating with pharmacy to facilitate vaccine administration. Thereafter, residents were educated on the process of giving a COVID-19 vaccine to a patient from start to finish. Due to the nature of the residency schedule, different housestaff members rotated in and out of the medicine wards during the intervention periods. The weekly email was sent to the entire internal medicine housestaff, informing all residents about the QI project, while the weekly secure messages served as reminders and were only sent to residents currently on the medicine wards.
In the second phase, we posted paper flyers throughout the hospital to remind housestaff to give the vaccine and again educate them on the process of ordering the vaccine. For the third intervention, a COVID-19 vaccine educational pamphlet was developed for distribution to inpatients at GWUH. The pamphlet included information on vaccine efficacy, safety, side effects, and eligibility. The pamphlet was incorporated into the admission packet that every patient receives upon admission to the hospital. Patients reviewed the pamphlet with nursing staff, who answered any questions, and residents were available to discuss outstanding concerns.
Measures and Data Gathering
The primary endpoint of the study was inpatient vaccination rate, defined as the number of COVID-19 vaccines administered divided by the number of patients eligible to receive a vaccine (not fully vaccinated). During initial triage, nursing staff documented vaccination status in the electronic health record (EHR), checking a box in a data entry form if a patient had received 0, 1, or 2 doses of the COVID-19 vaccine. The GWUH IT department generated data from this form to determine the number of patients eligible to receive a COVID-19 vaccine. Data were extracted from the medication administration record in the EHR to determine the number of vaccines that were administered to patients during their hospitalization on the inpatient medical service. Each month, the IT department extracted data for the number of eligible patients and the number of vaccines administered. This yielded the monthly vaccination rates. The monthly vaccination rates in the period prior to starting the QI initiative were compared to the rates in the period after the interventions were implemented.
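The endpoint defined above is a simple monthly ratio of the two counts extracted by the IT department. A minimal sketch of that calculation is shown below; the monthly counts used are placeholders for illustration, as the project's actual denominators are not reported.

```python
# Placeholder monthly counts (not the project's data): doses given to eligible
# (not fully vaccinated) patients and the number of eligible patients per month.
monthly_counts = {
    "2021-08": {"vaccinated": 11, "eligible": 103},
    "2021-09": {"vaccinated": 10, "eligible": 95},
    "2021-10": {"vaccinated": 11, "eligible": 102},
}

def vaccination_rate(counts: dict) -> float:
    """Inpatient vaccination rate = vaccines administered / eligible patients."""
    return counts["vaccinated"] / counts["eligible"]

rates = {month: vaccination_rate(c) for month, c in monthly_counts.items()}
for month, rate in rates.items():
    print(f"{month}: {rate:.1%}")

# The baseline (or intervention-period) figure is the mean of the monthly rates.
mean_rate = sum(rates.values()) / len(rates)
print(f"mean monthly rate: {mean_rate:.1%}")
```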
Of note, during the course of this project, patients became eligible for a third COVID-19 vaccine (booster). We decided to continue with the original aim of vaccinating adults who had only received 0 or 1 dose of the vaccine. Therefore, the eligibility criteria remained the same throughout the study. We obtained retrospective data to ensure that the vaccines being counted toward the vaccination rate were vaccines given to patients not yet fully vaccinated and not vaccines given as boosters.
Results
From August to October 2021, the baseline average monthly vaccination rate of patients on the medicine service who were eligible to receive a COVID-19 vaccine was 10.7%. After the first intervention, the vaccination rate increased to 19.7% in November 2021 (Table 2). The second intervention yielded vaccination rates of 11.4% and 11.8% in December 2021 and January 2022, respectively. During the final phase in February 2022, the vaccination rate was 19.0%. At the conclusion of the study, the mean vaccination rate for the intervention months was 15.4% (Figure 1). Process stability and variation are demonstrated with a statistical process control chart (Figure 2).
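Figure 2 presents process stability as a statistical process control chart. For a proportion-type measure such as a monthly vaccination rate, a common construction is a p-chart, with the center line at the pooled proportion and control limits set 3 standard errors above and below it, varying with each month's denominator. The sketch below illustrates that construction using the published monthly rates with assumed denominators; it is not a reproduction of the article's figure.

```python
import math

# Published monthly rates with assumed denominators (numerators chosen to match
# the reported rates for illustration; these are not the project's actual counts).
months = ["2021-11", "2021-12", "2022-01", "2022-02"]
vaccinated = [20, 12, 12, 19]
eligible = [102, 105, 102, 100]

p_bar = sum(vaccinated) / sum(eligible)   # pooled proportion = center line

for month, x, n in zip(months, vaccinated, eligible):
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)   # standard error for this month's denominator
    ucl = min(1.0, p_bar + 3 * sigma)            # upper control limit
    lcl = max(0.0, p_bar - 3 * sigma)            # lower control limit
    print(f"{month}: rate={x / n:.1%}  CL={p_bar:.1%}  LCL={lcl:.1%}  UCL={ucl:.1%}")
```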
Discussion
For this housestaff-driven QI project, we implemented an inpatient COVID-19 vaccination campaign consisting of 3 phases that targeted both providers and patients. During the intervention period, we observed an increased vaccination rate compared to the period just prior to implementation of the QI project. While our interventions likely boosted vaccination rates, we recognize that other factors could have contributed to the increase as well. The emergence of variants in the United States, such as omicron in December 2021,8 could have increased demand for vaccination among patients. Holidays in November and December may also have increased patients’ desire to get vaccinated before travel.
We encountered a number of roadblocks that challenged our project, including difficulty identifying patients who were eligible for the vaccine, logistical vaccine administration challenges, and hesitancy among the inpatient population. Accurately identifying patients who were eligible for a vaccine in the EHR was especially challenging in the setting of rapidly changing guidelines regarding COVID-19 vaccination. In September 2021, the US Food and Drug Administration authorized the Pfizer booster for certain populations and later, in November 2021, for all adults. This meant that some fully vaccinated hospitalized patients (those with 2 doses) then qualified for an additional dose of the vaccine and received a dose during hospitalization. To determine the true vaccination rate, we obtained retrospective data that allowed us to track each vaccine administered. If a patient had already received 2 doses of the COVID-19 vaccine, the vaccine administered was counted as a booster and excluded from the calculation of the vaccination rate. Future PDSA cycles could include updating the EHR to capture the whole range of COVID-19 vaccination status (unvaccinated, partially vaccinated, fully vaccinated, fully vaccinated with 1 booster, fully vaccinated with 2 boosters).
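The retrospective adjudication described above amounts to classifying each administered dose by the number of doses documented before admission. One way such logic might be encoded is sketched below; the field names are hypothetical, and the rule reflects the 2-dose primary series in use during the project period.

```python
def classify_dose(prior_doses: int) -> str:
    """Classify an inpatient COVID-19 dose for the vaccination-rate calculation.

    Patients with 0 or 1 documented prior doses are not fully vaccinated, so a
    dose given in the hospital counts toward the rate; patients with 2 or more
    prior doses are receiving a booster, which is excluded.
    """
    return "primary_series" if prior_doses < 2 else "booster"

# Hypothetical administrations pulled from the medication administration record.
administrations = [
    {"patient_id": "A1", "prior_doses": 0},
    {"patient_id": "B2", "prior_doses": 2},
    {"patient_id": "C3", "prior_doses": 1},
]

counted = [a for a in administrations
           if classify_dose(a["prior_doses"]) == "primary_series"]
print(f"doses counted toward the rate: {len(counted)} of {len(administrations)}")
```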
We also encountered logistical challenges with the administration of the COVID-19 vaccine to hospitalized patients. During the intervention period, our pharmacy department required 5 COVID-19 vaccination orders before opening a vial and administering the vaccine doses in order to reduce waste. This policy may have limited our ability to vaccinate eligible inpatients because we were not always able to identify 5 patients simultaneously on the service who were eligible and consented to the vaccine.
The majority of patients who were interested in receiving COVID-19 vaccination had already been vaccinated in the outpatient setting. This fact made the inpatient internal medicine subset of patients a particularly challenging population to target, given their possible hesitancy regarding vaccination. By utilizing a multidisciplinary team and increasing communication between providers and nursing staff, we helped to increase the COVID-19 vaccination rates at our hospital from 10.7% to 15.4%.
Future Directions
Future interventions to consider include increasing the availability of other approved COVID-19 vaccines at our hospital besides the Pfizer-BioNTech vaccine. Furthermore, incorporating the vaccine into the admission order set would help initiate the vaccination process early in the hospital course. We encourage other institutions to utilize similar approaches to not only remind providers about inpatient vaccination, but also educate and encourage patients to receive the vaccine. These measures will help institutions increase inpatient COVID-19 vaccination rates in a high-risk population.
Corresponding author: Anna Rubin, MD, Department of Medicine, The George Washington University School of Medicine and Health Sciences, Washington, DC; arubin@mfa.gwu.edu
Disclosures: None reported.
1. Trends in number of COVID-19 cases and deaths in the US reported to CDC, by state/territory. Centers for Disease Control and Prevention. Accessed February 25, 2022. https://covid.cdc.gov/covid-data-tracker/#trends_dailycases
2. Polack FP, Thomas SJ, Kitchin N, et al. Safety and efficacy of the BNT162b2 mRNA COVID-19 vaccine. N Engl J Med. 2020;383(27):2603-2615. doi:10.1056/nejmoa2034577
3. Hall V, Foulkes S, Insalata F, et al. Protection against SARS-CoV-2 after COVID-19 vaccination and previous infection. N Engl J Med. 2022;386(13):1207-1220. doi:10.1056/nejmoa2118691
4. Trends in number of COVID-19 vaccinations in the US. Centers for Disease Control and Prevention. Accessed February 25, 2022. https://covid.cdc.gov/covid-data-tracker/#vaccination-trends_vacctrends-fully-cum
5. King WC, Rubinstein M, Reinhart A, Mejia R. Time trends, factors associated with, and reasons for COVID-19 vaccine hesitancy: a massive online survey of US adults from January-May 2021. PLOS ONE. 2021;16(12). doi:10.1371/journal.pone.0260731
6. Cohen ES, Ogrinc G, Taylor T, et al. Influenza vaccination rates for hospitalised patients: a multiyear quality improvement effort. BMJ Qual Saf. 2015;24(3):221-227. doi:10.1136/bmjqs-2014-003556
7. Berger RE, Diaz DC, Chacko S, et al. Implementation of an inpatient COVID-19 vaccination program. NEJM Catalyst. 2021;2(10). doi:10.1056/cat.21.0235
8. CDC COVID-19 Response Team. SARS-CoV-2 B.1.1.529 (Omicron) Variant - United States, December 1-8, 2021. MMWR Morb Mortal Wkly Rep. 2021;70(50):1731-1734. doi:10.15585/mmwr.mm7050e1
From the Department of Medicine, The George Washington University School of Medicine and Health Sciences, Washington, DC.
Abstract
Objective: Inpatient vaccination initiatives are well described in the literature. During the COVID-19 pandemic, hospitals began administering COVID-19 vaccines to hospitalized patients. Although vaccination rates increased, there remained many unvaccinated patients despite community efforts. This quality improvement project aimed to increase the COVID-19 vaccination rates of hospitalized patients on the medicine service at the George Washington University Hospital (GWUH).
Methods: From November 2021 through February 2022, we conducted a Plan-Do-Study-Act (PDSA) cycle with 3 phases. Initial steps included gathering baseline data from the electronic health record and consulting stakeholders. The first 2 phases focused on educating housestaff on the availability, ordering process, and administration of the Pfizer vaccine. The third phase consisted of developing educational pamphlets for patients to be included in their admission packets.
Results: The baseline mean COVID-19 vaccination rate (August to October 2021) of eligible patients on the medicine service was 10.7%. In the months after we implemented the PDSA cycle (November 2021 to February 2022), the mean vaccination rate increased to 15.4%.
Conclusion: This quality improvement project implemented measures to increase administration of the Pfizer vaccine to eligible patients admitted to the medicine service at GWUH. The mean vaccination rate increased from 10.7% in the 3 months prior to implementation to 15.4% during the 4 months post implementation. Other measures to consider in the future include increasing the availability of other COVID-19 vaccines at our hospital and incorporating the vaccine into the admission order set to help facilitate vaccination early in the hospital course.
Keywords: housestaff, quality improvement, PDSA, COVID-19, BNT162b2 vaccine, patient education
Throughout the COVID-19 pandemic, case rates in the United States have fluctuated considerably, corresponding to epidemic waves. In 2021, US daily cases of COVID-19 peaked at nearly 300,000 in early January and reached a nadir of 8000 cases in mid-June.1 In September 2021, new cases had increased to 200,000 per day due to the prevalence of the Delta variant.1 Particularly with the emergence of new variants of SARS-CoV-2, vaccination efforts to limit the spread of infection and severity of illness are critical. Data have shown that 2 doses of the BNT162b2 vaccine (Pfizer-BioNTech) were largely protective against severe infection for approximately 6 months.2,3 When we began this quality improvement (QI) project in September 2021, only 179 million Americans had been fully vaccinated, according to data from the Centers for Disease Control and Prevention, which is just over half of the US population.4 An electronic survey conducted in the United States with more than 5 million responses found that, of those who were hesitant about receiving the vaccine, 49% reported a fear of adverse effects and 48% reported a lack of trust in the vaccine.5
This QI project sought to target unvaccinated individuals admitted to the internal medicine inpatient service. Vaccinating hospitalized patients is especially important since they are sicker than the general population and at higher risk of having poor outcomes from COVID-19. Inpatient vaccine initiatives, such as administering influenza vaccine prior to discharge, have been successfully implemented in the past.6 One large COVID-19 vaccination program featured an admission order set to increase the rates of vaccination among hospitalized patients.7 Our QI project piloted a multidisciplinary approach involving the nursing staff, pharmacy, information technology (IT) department, and internal medicine housestaff to increase COVID-19 vaccination rates among hospitalized patients on the medical service. This project aimed to increase inpatient vaccination rates through interventions targeting both primary providers as well as the patients themselves.
Methods
Setting and Interventions
This project was conducted at the George Washington University Hospital (GWUH) in Washington, DC. The clinicians involved in the study were the internal medicine housestaff, and the patients included were adults admitted to the resident medicine ward teams. The project was exempt by the institutional review board and did not require informed consent.
The quality improvement initiative had 3 phases, each featuring a different intervention (Table 1). The first phase involved sending a weekly announcement (via email and a secure health care messaging app) to current residents rotating on the inpatient medicine service. The announcement contained information regarding COVID-19 vaccine availability at the hospital, instructions on ordering the vaccine, and the process of coordinating with pharmacy to facilitate vaccine administration. Thereafter, residents were educated on the process of giving a COVID-19 vaccine to a patient from start to finish. Due to the nature of the residency schedule, different housestaff members rotated in and out of the medicine wards during the intervention periods. The weekly email was sent to the entire internal medicine housestaff, informing all residents about the QI project, while the weekly secure messages served as reminders and were only sent to residents currently on the medicine wards.
In the second phase, we posted paper flyers throughout the hospital to remind housestaff to give the vaccine and again educate them on the process of ordering the vaccine. For the third intervention, a COVID-19 vaccine educational pamphlet was developed for distribution to inpatients at GWUH. The pamphlet included information on vaccine efficacy, safety, side effects, and eligibility. The pamphlet was incorporated in the admission packet that every patient receives upon admission to the hospital. The patients reviewed the pamphlets with nursing staff, who would answer any questions, with residents available to discuss any outstanding concerns.
Measures and Data Gathering
The primary endpoint of the study was inpatient vaccination rate, defined as the number of COVID-19 vaccines administered divided by the number of patients eligible to receive a vaccine (not fully vaccinated). During initial triage, nursing staff documented vaccination status in the electronic health record (EHR), checking a box in a data entry form if a patient had received 0, 1, or 2 doses of the COVID-19 vaccine. The GWUH IT department generated data from this form to determine the number of patients eligible to receive a COVID-19 vaccine. Data were extracted from the medication administration record in the EHR to determine the number of vaccines that were administered to patients during their hospitalization on the inpatient medical service. Each month, the IT department extracted data for the number of eligible patients and the number of vaccines administered. This yielded the monthly vaccination rates. The monthly vaccination rates in the period prior to starting the QI initiative were compared to the rates in the period after the interventions were implemented.
Of note, during the course of this project, patients became eligible for a third COVID-19 vaccine (booster). We decided to continue with the original aim of vaccinating adults who had only received 0 or 1 dose of the vaccine. Therefore, the eligibility criteria remained the same throughout the study. We obtained retrospective data to ensure that the vaccines being counted toward the vaccination rate were vaccines given to patients not yet fully vaccinated and not vaccines given as boosters.
Results
From August to October 2021, the baseline average monthly vaccination rate of patients on the medicine service who were eligible to receive a COVID-19 vaccine was 10.7%. After the first intervention, the vaccination rate increased to 19.7% in November 2021 (Table 2). The second intervention yielded vaccination rates of 11.4% and 11.8% in December 2021 and January 2022, respectively. During the final phase in February 2022, the vaccination rate was 19.0%. At the conclusion of the study, the mean vaccination rate for the intervention months was 15.4% (Figure 1). Process stability and variation are demonstrated with a statistical process control chart (Figure 2).
Discussion
For this housestaff-driven QI project, we implemented an inpatient COVID-19 vaccination campaign consisting of 3 phases that targeted both providers and patients. During the intervention period, we observed an increased vaccination rate compared to the period just prior to implementation of the QI project. While our interventions may certainly have boosted vaccination rates, we understand other variables could have contributed to increased rates as well. The emergence of variants in the United States, such as omicron in December 2021,8 could have precipitated a demand for vaccinations among patients. Holidays in November and December may also have increased patients’ desire to get vaccinated before travel.
We encountered a number of roadblocks that challenged our project, including difficulty identifying patients who were eligible for the vaccine, logistical vaccine administration challenges, and hesitancy among the inpatient population. Accurately identifying patients who were eligible for a vaccine in the EHR was especially challenging in the setting of rapidly changing guidelines regarding COVID-19 vaccination. In September 2021, the US Food and Drug Administration authorized the Pfizer booster for certain populations and later, in November 2021, for all adults. This meant that some fully vaccinated hospitalized patients (those with 2 doses) then qualified for an additional dose of the vaccine and received a dose during hospitalization. To determine the true vaccination rate, we obtained retrospective data that allowed us to track each vaccine administered. If a patient had already received 2 doses of the COVID-19 vaccine, the vaccine administered was counted as a booster and excluded from the calculation of the vaccination rate. Future PDSA cycles could include updating the EHR to capture the whole range of COVID-19 vaccination status (unvaccinated, partially vaccinated, fully vaccinated, fully vaccinated with 1 booster, fully vaccinated with 2 boosters).
We also encountered logistical challenges with the administration of the COVID-19 vaccine to hospitalized patients. During the intervention period, our pharmacy department required 5 COVID-19 vaccination orders before opening a vial and administering the vaccine doses in order to reduce waste. This policy may have limited our ability to vaccinate eligible inpatients because we were not always able to identify 5 patients simultaneously on the service who were eligible and consented to the vaccine.
The majority of patients who were interested in receiving COVID-19 vaccination had already been vaccinated in the outpatient setting. This fact made the inpatient internal medicine subset of patients a particularly challenging population to target, given their possible hesitancy regarding vaccination. By utilizing a multidisciplinary team and increasing communication of providers and nursing staff, we helped to increase the COVID-19 vaccination rates at our hospital from 10.7% to 15.4%.
Future Directions
Future interventions to consider include increasing the availability of other approved COVID-19 vaccines at our hospital besides the Pfizer-BioNTech vaccine. Furthermore, incorporating the vaccine into the admission order set would help initiate the vaccination process early in the hospital course. We encourage other institutions to utilize similar approaches to not only remind providers about inpatient vaccination, but also educate and encourage patients to receive the vaccine. These measures will help institutions increase inpatient COVID-19 vaccination rates in a high-risk population.
Corresponding author: Anna Rubin, MD, Department of Medicine, The George Washington University School of Medicine and Health Sciences, Washington, DC; arubin@mfa.gwu.edu
Disclosures: None reported.
From the Department of Medicine, The George Washington University School of Medicine and Health Sciences, Washington, DC.
Abstract
Objective: Inpatient vaccination initiatives are well described in the literature. During the COVID-19 pandemic, hospitals began administering COVID-19 vaccines to hospitalized patients. Although vaccination rates increased, there remained many unvaccinated patients despite community efforts. This quality improvement project aimed to increase the COVID-19 vaccination rates of hospitalized patients on the medicine service at the George Washington University Hospital (GWUH).
Methods: From November 2021 through February 2022, we conducted a Plan-Do-Study-Act (PDSA) cycle with 3 phases. Initial steps included gathering baseline data from the electronic health record and consulting stakeholders. The first 2 phases focused on educating housestaff on the availability, ordering process, and administration of the Pfizer vaccine. The third phase consisted of developing educational pamphlets for patients to be included in their admission packets.
Results: The baseline mean COVID-19 vaccination rate (August to October 2021) of eligible patients on the medicine service was 10.7%. In the months after we implemented the PDSA cycle (November 2021 to February 2022), the mean vaccination rate increased to 15.4%.
Conclusion: This quality improvement project implemented measures to increase administration of the Pfizer vaccine to eligible patients admitted to the medicine service at GWUH. The mean vaccination rate increased from 10.7% in the 3 months prior to implementation to 15.4% during the 4 months post implementation. Other measures to consider in the future include increasing the availability of other COVID-19 vaccines at our hospital and incorporating the vaccine into the admission order set to help facilitate vaccination early in the hospital course.
Keywords: housestaff, quality improvement, PDSA, COVID-19, BNT162b2 vaccine, patient education
Throughout the COVID-19 pandemic, case rates in the United States have fluctuated considerably, corresponding to epidemic waves. In 2021, US daily cases of COVID-19 peaked at nearly 300,000 in early January and reached a nadir of 8000 cases in mid-June.1 In September 2021, new cases had increased to 200,000 per day due to the prevalence of the Delta variant.1 Particularly with the emergence of new variants of SARS-CoV-2, vaccination efforts to limit the spread of infection and severity of illness are critical. Data have shown that 2 doses of the BNT162b2 vaccine (Pfizer-BioNTech) were largely protective against severe infection for approximately 6 months.2,3 When we began this quality improvement (QI) project in September 2021, only 179 million Americans had been fully vaccinated, according to data from the Centers for Disease Control and Prevention, which is just over half of the US population.4 An electronic survey conducted in the United States with more than 5 million responses found that, of those who were hesitant about receiving the vaccine, 49% reported a fear of adverse effects and 48% reported a lack of trust in the vaccine.5
This QI project sought to target unvaccinated individuals admitted to the internal medicine inpatient service. Vaccinating hospitalized patients is especially important since they are sicker than the general population and at higher risk of having poor outcomes from COVID-19. Inpatient vaccine initiatives, such as administering influenza vaccine prior to discharge, have been successfully implemented in the past.6 One large COVID-19 vaccination program featured an admission order set to increase the rates of vaccination among hospitalized patients.7 Our QI project piloted a multidisciplinary approach involving the nursing staff, pharmacy, information technology (IT) department, and internal medicine housestaff to increase COVID-19 vaccination rates among hospitalized patients on the medical service. This project aimed to increase inpatient vaccination rates through interventions targeting both primary providers as well as the patients themselves.
Methods
Setting and Interventions
This project was conducted at the George Washington University Hospital (GWUH) in Washington, DC. The clinicians involved in the study were the internal medicine housestaff, and the patients included were adults admitted to the resident medicine ward teams. The project was exempt by the institutional review board and did not require informed consent.
The quality improvement initiative had 3 phases, each featuring a different intervention (Table 1). The first phase involved sending a weekly announcement (via email and a secure health care messaging app) to current residents rotating on the inpatient medicine service. The announcement contained information regarding COVID-19 vaccine availability at the hospital, instructions on ordering the vaccine, and the process of coordinating with pharmacy to facilitate vaccine administration. Thereafter, residents were educated on the process of giving a COVID-19 vaccine to a patient from start to finish. Due to the nature of the residency schedule, different housestaff members rotated in and out of the medicine wards during the intervention periods. The weekly email was sent to the entire internal medicine housestaff, informing all residents about the QI project, while the weekly secure messages served as reminders and were only sent to residents currently on the medicine wards.
In the second phase, we posted paper flyers throughout the hospital to remind housestaff to give the vaccine and again educate them on the process of ordering the vaccine. For the third intervention, a COVID-19 vaccine educational pamphlet was developed for distribution to inpatients at GWUH. The pamphlet included information on vaccine efficacy, safety, side effects, and eligibility. The pamphlet was incorporated in the admission packet that every patient receives upon admission to the hospital. The patients reviewed the pamphlets with nursing staff, who would answer any questions, with residents available to discuss any outstanding concerns.
Measures and Data Gathering
The primary endpoint of the study was inpatient vaccination rate, defined as the number of COVID-19 vaccines administered divided by the number of patients eligible to receive a vaccine (not fully vaccinated). During initial triage, nursing staff documented vaccination status in the electronic health record (EHR), checking a box in a data entry form if a patient had received 0, 1, or 2 doses of the COVID-19 vaccine. The GWUH IT department generated data from this form to determine the number of patients eligible to receive a COVID-19 vaccine. Data were extracted from the medication administration record in the EHR to determine the number of vaccines that were administered to patients during their hospitalization on the inpatient medical service. Each month, the IT department extracted data for the number of eligible patients and the number of vaccines administered. This yielded the monthly vaccination rates. The monthly vaccination rates in the period prior to starting the QI initiative were compared to the rates in the period after the interventions were implemented.
Of note, during the course of this project, patients became eligible for a third COVID-19 vaccine (booster). We decided to continue with the original aim of vaccinating adults who had only received 0 or 1 dose of the vaccine. Therefore, the eligibility criteria remained the same throughout the study. We obtained retrospective data to ensure that the vaccines being counted toward the vaccination rate were vaccines given to patients not yet fully vaccinated and not vaccines given as boosters.
Results
From August to October 2021, the baseline average monthly vaccination rate of patients on the medicine service who were eligible to receive a COVID-19 vaccine was 10.7%. After the first intervention, the vaccination rate increased to 19.7% in November 2021 (Table 2). The second intervention yielded vaccination rates of 11.4% and 11.8% in December 2021 and January 2022, respectively. During the final phase in February 2022, the vaccination rate was 19.0%. At the conclusion of the study, the mean vaccination rate for the intervention months was 15.4% (Figure 1). Process stability and variation are demonstrated with a statistical process control chart (Figure 2).
Discussion
For this housestaff-driven QI project, we implemented an inpatient COVID-19 vaccination campaign consisting of 3 phases that targeted both providers and patients. During the intervention period, we observed an increased vaccination rate compared to the period just prior to implementation of the QI project. While our interventions may certainly have boosted vaccination rates, we understand other variables could have contributed to increased rates as well. The emergence of variants in the United States, such as omicron in December 2021,8 could have precipitated a demand for vaccinations among patients. Holidays in November and December may also have increased patients’ desire to get vaccinated before travel.
We encountered a number of roadblocks that challenged our project, including difficulty identifying patients who were eligible for the vaccine, logistical vaccine administration challenges, and hesitancy among the inpatient population. Accurately identifying patients who were eligible for a vaccine in the EHR was especially challenging in the setting of rapidly changing guidelines regarding COVID-19 vaccination. In September 2021, the US Food and Drug Administration authorized the Pfizer booster for certain populations and later, in November 2021, for all adults. This meant that some fully vaccinated hospitalized patients (those with 2 doses) then qualified for an additional dose of the vaccine and received a dose during hospitalization. To determine the true vaccination rate, we obtained retrospective data that allowed us to track each vaccine administered. If a patient had already received 2 doses of the COVID-19 vaccine, the vaccine administered was counted as a booster and excluded from the calculation of the vaccination rate. Future PDSA cycles could include updating the EHR to capture the whole range of COVID-19 vaccination status (unvaccinated, partially vaccinated, fully vaccinated, fully vaccinated with 1 booster, fully vaccinated with 2 boosters).
We also encountered logistical challenges with the administration of the COVID-19 vaccine to hospitalized patients. During the intervention period, our pharmacy department required 5 COVID-19 vaccination orders before opening a vial and administering the doses, a policy intended to reduce vaccine waste. This policy may have limited our ability to vaccinate eligible inpatients because we were not always able to identify 5 eligible, consenting patients on the service at the same time.
The majority of patients who were interested in receiving COVID-19 vaccination had already been vaccinated in the outpatient setting. This made the remaining inpatient internal medicine population particularly challenging to target, given their possible hesitancy regarding vaccination. By utilizing a multidisciplinary team and increasing communication between providers and nursing staff, we helped to increase the COVID-19 vaccination rate at our hospital from 10.7% to 15.4%.
Future Directions
Future interventions to consider include increasing the availability of other approved COVID-19 vaccines at our hospital besides the Pfizer-BioNTech vaccine. Furthermore, incorporating the vaccine into the admission order set would help initiate the vaccination process early in the hospital course. We encourage other institutions to utilize similar approaches to not only remind providers about inpatient vaccination, but also educate and encourage patients to receive the vaccine. These measures will help institutions increase inpatient COVID-19 vaccination rates in a high-risk population.
Corresponding author: Anna Rubin, MD, Department of Medicine, The George Washington University School of Medicine and Health Sciences, Washington, DC; arubin@mfa.gwu.edu
Disclosures: None reported.
1. Trends in number of COVID-19 cases and deaths in the US reported to CDC, by state/territory. Centers for Disease Control and Prevention. Accessed February 25, 2022. https://covid.cdc.gov/covid-data-tracker/#trends_dailycases
2. Polack FP, Thomas SJ, Kitchin N, et al. Safety and efficacy of the BNT162b2 mRNA COVID-19 vaccine. N Engl J Med. 2020;383(27):2603-2615. doi:10.1056/nejmoa2034577
3. Hall V, Foulkes S, Insalata F, et al. Protection against SARS-CoV-2 after COVID-19 vaccination and previous infection. N Engl J Med. 2022;386(13):1207-1220. doi:10.1056/nejmoa2118691
4. Trends in number of COVID-19 vaccinations in the US. Centers for Disease Control and Prevention. Accessed February 25, 2022. https://covid.cdc.gov/covid-data-tracker/#vaccination-trends_vacctrends-fully-cum
5. King WC, Rubinstein M, Reinhart A, Mejia R. Time trends, factors associated with, and reasons for COVID-19 vaccine hesitancy: a massive online survey of US adults from January-May 2021. PLoS One. 2021;16(12):e0260731. doi:10.1371/journal.pone.0260731
6. Cohen ES, Ogrinc G, Taylor T, et al. Influenza vaccination rates for hospitalised patients: a multiyear quality improvement effort. BMJ Qual Saf. 2015;24(3):221-227. doi:10.1136/bmjqs-2014-003556
7. Berger RE, Diaz DC, Chacko S, et al. Implementation of an inpatient COVID-19 vaccination program. NEJM Catalyst. 2021;2(10). doi:10.1056/cat.21.0235
8. CDC COVID-19 Response Team. SARS-CoV-2 B.1.1.529 (Omicron) Variant - United States, December 1-8, 2021. MMWR Morb Mortal Wkly Rep. 2021;70(50):1731-1734. doi:10.15585/mmwr.mm7050e1
Diabetes Population Health Innovations in the Age of COVID-19: Insights From the T1D Exchange Quality Improvement Collaborative
From the T1D Exchange, Boston, MA (Ann Mungmode, Nicole Rioles, Jesse Cases, Dr. Ebekozien); The Leona M. and Harry B. Hemsley Charitable Trust, New York, NY (Laurel Koester); and the University of Mississippi School of Population Health, Jackson, MS (Dr. Ebekozien).
Abstract
There have been remarkable innovations in diabetes management since the start of the COVID-19 pandemic, but these groundbreaking innovations are drawing limited attention as the field focuses on the adverse impact of the pandemic on patients with diabetes. This article reviews select population health innovations in diabetes management that have become available over the past 2 years of the COVID-19 pandemic from the perspective of the T1D Exchange Quality Improvement Collaborative, a learning health network that focuses on improving care and outcomes for individuals with type 1 diabetes (T1D). Such innovations include expanded telemedicine access, collection of real-world data, machine learning and artificial intelligence, and new diabetes medications and devices. In addition, multiple innovative studies have been undertaken to explore contributors to health inequities in diabetes, and advocacy efforts for specific populations have been successful. Looking to the future, work is required to explore additional health equity successes that do not further exacerbate inequities and to find additional innovative ways to engage people with T1D in their health care through conversations on social determinants of health and societal structures.
Keywords: type 1 diabetes, learning health network, continuous glucose monitoring, health equity
One in 10 people in the United States has diabetes.1 Diabetes is the nation’s seventh leading cause of death, costing the US health system more than $300 billion annually.2 The COVID-19 pandemic presented additional health burdens for people living with diabetes. For example, preexisting diabetes was identified as a risk factor for COVID-19–associated morbidity and mortality.3,4 Over the past 2 years, there have been remarkable innovations in diabetes management, including stem cell therapy and new medication options. Additionally, improved technology solutions have aided in diabetes management through continuous glucose monitors (CGM), smart insulin pens, advanced hybrid closed-loop systems, and continuous subcutaneous insulin infusion.5,6 Unfortunately, these groundbreaking innovations are drawing limited attention, as the field is rightfully focused on the adverse impact of the pandemic on patients with diabetes.
Learning health networks like the T1D Exchange Quality Improvement Collaborative (T1DX-QI) have implemented some of these innovative solutions to improve care for people with diabetes.7 T1DX-QI has more than 50 data-sharing endocrinology centers that care for over 75,000 people with diabetes across the United States (Figure 1). Centers participating in the T1DX-QI use quality improvement (QI) and implementation science methods to quickly translate research into evidence-based clinical practice. T1DX-QI leads diabetes population health and health system research and supports widespread transferability across health care organizations through regular collaborative calls, conferences, and case study documentation.8
In this review, we summarize impactful population health innovations in diabetes management that have become available over the past 2 years of the COVID-19 pandemic from the perspective of T1DX-QI (see Figure 2 for relevant definitions). This review is limited in scope and is not meant to be an exhaustive list of innovations. The review also reflects significant changes from the perspective of academic diabetes centers, which may not apply to rural or primary care diabetes practices.
Methods
The first (A.M.), second (H.H.), and senior (O.E.) authors conducted a scoping review of published literature using terms related to diabetes, population health, and innovation on PubMed Central and Google Scholar for the period March 2020 to June 2022. To complement the review, A.M. and O.E. also reviewed abstracts from presentations at major international diabetes conferences, including the American Diabetes Association (ADA), the International Society for Pediatric and Adolescent Diabetes (ISPAD), the T1DX-QI Learning Session Conference, and the Advanced Technologies & Treatments for Diabetes (ATTD) 2020 to 2022 conferences.9-14 The authors also searched FDA.gov and ClinicalTrials.gov for relevant insights. A.M. and O.E. sorted the reviewed literature into major themes (Figure 3) from the population health improvement perspective of the T1DX-QI.
Population Health Innovations in Diabetes Management
Expansion of Telemedicine Access
Telemedicine is cost-effective for patients with diabetes,15 including those with complex cases.16 Before the COVID-19 pandemic, telemedicine and virtual care were rare in diabetes management. However, the pandemic offered a new opportunity to expand the practice of telemedicine in diabetes management. A study from the T1DX-QI showed that telemedicine visits grew from comprising <1% of visits pre-pandemic (December 2019) to 95.2% during the pandemic (August 2020).17 Additional studies, like those conducted by Phillip et al,18 confirmed the noninferiority of telemedicine practice for patients with diabetes. Telemedicine was also found to be an effective strategy to educate patients on the use of diabetes technologies.19
Real-World Data and Disease Surveillance
As the COVID-19 pandemic exacerbated outcomes for people with type 1 diabetes (T1D), a need arose to understand the immediate effects of the pandemic on people with T1D through real-world data and disease surveillance. In April 2020, the T1DX-QI initiated a multicenter surveillance study to collect and analyze data on the impact of COVID-19 on people with T1D. The existing health collaborative served as a springboard for a robust surveillance study, yielding numerous publications on the effects of COVID-19.3,4,20-28 Other investigators also embraced the power of real-world surveillance and real-world data.29,30
Big Data, Machine Learning, and Artificial Intelligence
The past 2 years have seen a shift toward tapping the large volume of data generated during routine care for practical insights.31 In particular, researchers have demonstrated the widespread application of machine learning and artificial intelligence to improve diabetes management.32 The T1DX-QI also harnessed the growing power of big data by expanding the functionality of its innovative benchmarking software. The T1DX QI Portal uses electronic medical record data from patients with diabetes for clinic-to-clinic benchmarking and data analysis, using business intelligence solutions.33
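As a purely illustrative sketch of clinic-to-clinic benchmarking from de-identified EMR extracts (not a description of the actual T1DX QI Portal or its data model), the example below aggregates a hypothetical quality metric by clinic; the field names and metric choice are assumptions.

# Illustrative benchmarking sketch: median HbA1c and CGM uptake per clinic.
import pandas as pd

emr = pd.DataFrame({
    "clinic":  ["A", "A", "B", "B", "C"],
    "hba1c":   [7.9, 8.4, 7.2, 7.6, 9.1],
    "cgm_use": [True, False, True, True, False],
})

benchmark = emr.groupby("clinic").agg(
    median_hba1c=("hba1c", "median"),
    cgm_uptake=("cgm_use", "mean"),
    patients=("hba1c", "size"),
)
print(benchmark.sort_values("median_hba1c"))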
Health Equity
While inequities across various health outcomes have been well documented for years,34 the COVID-19 pandemic further exacerbated racial/ethnic health inequities in T1D.23,35 In response, several organizations have outlined specific strategies to address these health inequities. Emboldened by the pandemic, the T1DX-QI announced a multipronged approach to addressing health inequities among patients with T1D through the Health Equity Advancement Lab (HEAL).36 One of HEAL’s main components is using real-world data to champion population-level insights and demonstrate progress in QI efforts.
Multiple innovative studies have been undertaken to explore contributors to health inequities in diabetes, and these studies are expanding our understanding of the chasm.37 There have also been innovative solutions to address these inequities, with multiple studies published over the past 2 years.38 One source of inequity among patients with T1D is the underrepresentation of racial/ethnic minorities with T1D in clinical trials.39 The T1DX-QI suggests that the equity-adapted framework for QI can be applied by research leaders to support trial diversity and representation, ensuring that future device innovations are meaningful for all people with T1D.40
Diabetes Devices
Glucose monitoring and insulin therapy are vital tools to support individuals living with T1D, and devices such as CGM and insulin pumps have become the standard of care for diabetes management (Table).41 Innovations in diabetes technology and device access are imperative for a chronic disease with no cure.
The COVID-19 pandemic created an opportunity to increase access to diabetes devices in inpatient settings. In 2020, the US Food and Drug Administration expanded the use of CGM to support remote monitoring of patients in inpatient hospital settings, simultaneously supporting the glucose monitoring needs of patients with T1D and reducing COVID-19 transmission through reduced patient-clinician contact.42 This effort has been expanded and will continue in 2022 and beyond,43 and aligns with the growing consensus that supports patients wearing both CGMs and insulin pumps in ambulatory settings to improve patient health outcomes.44
Since 2020, innovations in diabetes technology have improved and increased the variety of options available to people with T1D and made them easier to use (Table). New advanced hybrid closed-loop systems now offer Bluetooth connectivity, automatic software upgrades, tubeless designs, and the ability for parents to bolus for their children from a smartphone.45-47 The next big step in insulin delivery innovation is the release of functioning, fully closed-loop systems, several of which are currently in clinical trials.48 These systems support reduced hypoglycemia and improved time in range.49
Additional innovations in insulin delivery have improved the user experience and expanded therapeutic options, including a variety of smart insulin pens complete with dosing logs50,51 and even a patch to deliver insulin without the burden of injections.52 As barriers to diabetes technology persist,53 innovations in alternate insulin delivery provide people with T1D more options to align with their personal access and technology preferences.
Innovations in CGM address cited barriers to use, including device size and the burden of wear.53-55 CGMs released in the past few years are physically smaller, can be worn longer between sensor changes, are more accurate, and do not require calibration.
New Diabetes Medications
Many new medications and therapeutic advances have become available in the past 2 years.56 Additionally, more medications are being tested as adjunct therapies to support glycemic management in patients with T1D, including metformin, sodium-glucose cotransporter 1 and 2 inhibitors, pramlintide, glucagon-like peptide-1 analogs, and glucagon receptor agonists.57 Other recent advances include stem cell replacement therapy for patients with T1D.58 Ultra-long-acting biosimilar insulins are one medical innovation that has been stalled, rather than propelled, during the COVID-19 pandemic.59
Diabetes Policy Advocacy
People with T1D require insulin to survive. The cost of insulin has increased in recent years, with some studies citing a 64% to 100% increase in the past decade.60,61 In fact, 1 in 4 insulin users report that cost has impacted their insulin use, including rationing their insulin.62 Lockdowns during the COVID-19 pandemic stressed US families financially, increasing the urgency for insulin cost caps.
Although the COVID-19 pandemic halted national conversations on drug financing,63 advocacy efforts have succeeded for specific populations. The new Medicare Part D Senior Savings Model will cap the cost of insulin at $35 for a 30-day supply,64 and 20 states have passed legislation capping insulin pricing.62 Efforts to codify national cost caps continue, including the Affordable Insulin Now Act, which passed the House in March 2022 and is currently under review in the Senate.65
Perspective: The Role of Private Philanthropy in Supporting Population Health Innovations
Funders and industry partners play a crucial role in leading and supporting innovations that improve the lives of people with T1D and reduce society’s costs of living with the disease. Data infrastructure is critical to supporting population health. While building the data infrastructure to support population health is both time- and resource-intensive, private foundations such as Helmsley are uniquely positioned—and have a responsibility—to take large, informed risks to help reach all communities with T1D.
The T1DX-QI is the largest source of population health data on T1D in the United States and is becoming the premier data authority on its incidence, prevalence, and outcomes. The T1DX-QI enables a robust understanding of T1D-related health trends at the population level, as well as trends among clinics and providers. Pilot centers in the T1DX-QI have reported reductions in patients’ A1c and acute diabetes-related events, as well as improvements in device usage and depression screening. The ability to capture these changes speaks to the promise and power of these data to demonstrate the clinical impact of QI interventions and to support the spread of best practices and learnings across health systems.
Additional philanthropic efforts have supported innovation in the last 2 years. For example, JDRF, a nonprofit organization, has supported efforts to develop artificial pancreas systems and, through its T1D Fund, cell therapies and drugs currently in clinical trials, such as teplizumab, which has been shown to delay the onset of T1D.66 Industry partners also have an opportunity for significant influence in this area, as they continue to fund meaningful projects to advance care for people with T1D.67
Conclusion
We are optimistic that the innovations summarized here signal a shift toward more equitable T1D outcomes; however, future work is required to explore additional health equity successes that do not further exacerbate inequities. We also see further opportunities for innovative ways to engage people with T1D in their health care through conversations on social determinants of health and societal structures.
Corresponding author: Ann Mungmode, MPH, T1D Exchange, 11 Avenue de Lafayette, Boston, MA 02111; Email: amungmode@t1dexchange.org
Disclosures: Dr. Ebekozien serve(d) as a director, officer, partner, employee, advisor, consultant, or trustee for the Medtronic Advisory Board and received research grants from Medtronic Diabetes, Eli Lilly, and Dexcom.
Funding: The T1DX-QI is funded by The Leona M. and Harry B. Hemsley Charitable Trust.
1. Centers for Disease Control and Prevention. National diabetes statistics report. Accessed August 30, 2022. www.cdc.gov/diabetes/data/statistics-report/index.html
2. Centers for Disease Control and Prevention. Diabetes fast facts. Accessed August 30, 2022. www.cdc.gov/diabetes/basics/quick-facts.html
3. O’Malley G, Ebekozien O, Desimone M, et al. COVID-19 hospitalization in adults with type 1 diabetes: results from the T1D Exchange Multicenter Surveillance Study. J Clin Endocrinol Metab. 2020;106(2):e936-e942. doi:10.1210/clinem/dgaa825
4. Ebekozien OA, Noor N, Gallagher MP, Alonso GT. Type 1 diabetes and COVID-19: preliminary findings from a multicenter surveillance study in the U.S. Diabetes Care. 2020;43(8):e83-e85. doi:10.2337/dc20-1088
5. Zimmerman C, Albanese-O’Neill A, Haller MJ. Advances in type 1 diabetes technology over the last decade. Eur Endocrinol. 2019;15(2):70-76. doi:10.17925/ee.2019.15.2.70
6. Wake DJ, Gibb FW, Kar P, et al. Endocrinology in the time of COVID-19: remodelling diabetes services and emerging innovation. Eur J Endocrinol. 2020;183(2):G67-G77. doi:10.1530/eje-20-0377
7. Alonso GT, Corathers S, Shah A, et al. Establishment of the T1D Exchange Quality Improvement Collaborative (T1DX-QI). Clin Diabetes. 2020;38(2):141-151. doi:10.2337/cd19-0032
8. Ginnard OZB, Alonso GT, Corathers SD, et al. Quality improvement in diabetes care: a review of initiatives and outcomes in the T1D Exchange Quality Improvement Collaborative. Clin Diabetes. 2021;39(3):256-263. doi:10.2337/cd21-0029
9. ATTD 2021 invited speaker abstracts. Diabetes Technol Ther. 2021;23(S2):A1-A206. doi:10.1089/dia.2021.2525.abstracts
10. Rompicherla SN, Edelen N, Gallagher R, et al. Children and adolescent patients with pre-existing type 1 diabetes and additional comorbidities have an increased risk of hospitalization from COVID-19; data from the T1D Exchange COVID Registry. Pediatr Diabetes. 2021;22(S30):3-32. doi:10.1111/pedi.13268
11. Abstracts for the T1D Exchange QI Collaborative (T1DX-QI) Learning Session 2021. November 8-9, 2021. J Diabetes. 2021;13(S1):3-17. doi:10.1111/1753-0407.13227
12. The Official Journal of ATTD Advanced Technologies & Treatments for Diabetes conference 27-30 April 2022. Barcelona and online. Diabetes Technol Ther. 2022;24(S1):A1-A237. doi:10.1089/dia.2022.2525.abstracts
13. Ebekozien ON, Kamboj N, Odugbesan MK, et al. Inequities in glycemic outcomes for patients with type 1 diabetes: six-year (2016-2021) longitudinal follow-up by race and ethnicity of 36,390 patients in the T1DX-QI Collaborative. Diabetes. 2022;71(suppl 1). doi:10.2337/db22-167-OR
14. Narayan KA, Noor M, Rompicherla N, et al. No BMI increase during the COVID-pandemic in children and adults with T1D in three continents: joint analysis of ADDN, T1DX, and DPV registries. Diabetes. 2022;71(suppl 1). doi:10.2337/db22-269-OR
15. Lee JY, Lee SWH. Telemedicine cost-effectiveness for diabetes management: a systematic review. Diabetes Technol Ther. 2018;20(7):492-500. doi:10.1089/dia.2018.0098
16. McDonnell ME. Telemedicine in complex diabetes management. Curr Diab Rep. 2018;18(7):42. doi:10.1007/s11892-018-1015-3
17. Lee JM, Carlson E, Albanese-O’Neill A, et al. Adoption of telemedicine for type 1 diabetes care during the COVID-19 pandemic. Diabetes Technol Ther. 2021;23(9):642-651. doi:10.1089/dia.2021.0080
18. Phillip M, Bergenstal RM, Close KL, et al. The digital/virtual diabetes clinic: the future is now–recommendations from an international panel on diabetes digital technologies introduction. Diabetes Technol Ther. 2021;23(2):146-154. doi:10.1089/dia.2020.0375
19. Garg SK, Rodriguez E. COVID‐19 pandemic and diabetes care. Diabetes Technol Ther. 2022;24(S1):S2-S20. doi:10.1089/dia.2022.2501
20. Beliard K, Ebekozien O, Demeterco-Berggren C, et al. Increased DKA at presentation among newly diagnosed type 1 diabetes patients with or without COVID-19: data from a multi-site surveillance registry. J Diabetes. 2021;13(3):270-272. doi:10.1111/1753-0407.13141
21. Ebekozien O, Agarwal S, Noor N, et al. Inequities in diabetic ketoacidosis among patients with type 1 diabetes and COVID-19: data from 52 US clinical centers. J Clin Endocrinol Metab. 2020;106(4):1755-1762. doi:10.1210/clinem/dgaa920
22. Alonso GT, Ebekozien O, Gallagher MP, et al. Diabetic ketoacidosis drives COVID-19 related hospitalizations in children with type 1 diabetes. J Diabetes. 2021;13(8):681-687. doi:10.1111/1753-0407.13184
23. Noor N, Ebekozien O, Levin L, et al. Diabetes technology use for management of type 1 diabetes is associated with fewer adverse COVID-19 outcomes: findings from the T1D Exchange COVID-19 Surveillance Registry. Diabetes Care. 2021;44(8):e160-e162. doi:10.2337/dc21-0074
24. Demeterco-Berggren C, Ebekozien O, Rompicherla S, et al. Age and hospitalization risk in people with type 1 diabetes and COVID-19: data from the T1D Exchange Surveillance Study. J Clin Endocrinol Metab. 2021;107(2):410-418. doi:10.1210/clinem/dgab668
25. DeSalvo DJ, Noor N, Xie C, et al. Patient demographics and clinical outcomes among type 1 diabetes patients using continuous glucose monitors: data from T1D Exchange real-world observational study. J Diabetes Sci Technol. 2021 Oct 9. [Epub ahead of print] doi:10.1177/19322968211049783
26. Gallagher MP, Rompicherla S, Ebekozien O, et al. Differences in COVID-19 outcomes among patients with type 1 diabetes: first vs later surges. J Clin Outcomes Manage. 2022;29(1):27-31. doi:10.12788/jcom.0084
27. Wolf RM, Noor N, Izquierdo R, et al. Increase in newly diagnosed type 1 diabetes in youth during the COVID-19 pandemic in the United States: a multi-center analysis. Pediatr Diabetes. 2022;23(4):433-438. doi:10.1111/pedi.13328
28. Lavik AR, Ebekozien O, Noor N, et al. Trends in type 1 diabetic ketoacidosis during COVID-19 surges at 7 US centers: highest burden on non-Hispanic Black patients. J Clin Endocrinol Metab. 2022;107(7):1948-1955. doi:10.1210/clinem/dgac158
29. van der Linden J, Welsh JB, Hirsch IB, Garg SK. Real-time continuous glucose monitoring during the coronavirus disease 2019 pandemic and its impact on time in range. Diabetes Technol Ther. 2021;23(S1):S1-S7. doi:10.1089/dia.2020.0649
30. Nwosu BU, Al-Halbouni L, Parajuli S, et al. COVID-19 pandemic and pediatric type 1 diabetes: no significant change in glycemic control during the pandemic lockdown of 2020. Front Endocrinol (Lausanne). 2021;12:703905. doi:10.3389/fendo.2021.703905
31. Ellahham S. Artificial intelligence: the future for diabetes care. Am J Med. 2020;133(8):895-900. doi:10.1016/j.amjmed.2020.03.033
32. Nomura A, Noguchi M, Kometani M, et al. Artificial intelligence in current diabetes management and prediction. Curr Diab Rep. 2021;21(12):61. doi:10.1007/s11892-021-01423-2
33. Mungmode A, Noor N, Weinstock RS, et al. Making diabetes electronic medical record data actionable: promoting benchmarking and population health using the T1D Exchange Quality Improvement Portal. Clin Diabetes. Forthcoming 2022.
34. Lavizzo-Mourey RJ, Besser RE, Williams DR. Understanding and mitigating health inequities—past, current, and future directions. N Engl J Med. 2021;384(18):1681-1684. doi:10.1056/NEJMp2008628
35. Majidi S, Ebekozien O, Noor N, et al. Inequities in health outcomes in children and adults with type 1 diabetes: data from the T1D Exchange Quality Improvement Collaborative. Clin Diabetes. 2021;39(3):278-283. doi:10.2337/cd21-0028
36. Ebekozien O, Mungmode A, Odugbesan O, et al. Addressing type 1 diabetes health inequities in the United States: approaches from the T1D Exchange QI Collaborative. J Diabetes. 2022;14(1):79-82. doi:10.1111/1753-0407.13235
37. Odugbesan O, Addala A, Nelson G, et al. Implicit racial-ethnic and insurance-mediated bias to recommending diabetes technology: insights from T1D Exchange multicenter pediatric and adult diabetes provider cohort. Diabetes Technol Ther. 2022 Jun 13. [Epub ahead of print] doi:10.1089/dia.2022.0042
38. Schmitt J, Fogle K, Scott ML, Iyer P. Improving equitable access to continuous glucose monitors for Alabama’s children with type 1 diabetes: a quality improvement project. Diabetes Technol Ther. 2022;24(7):481-491. doi:10.1089/dia.2021.0511
39. Akturk HK, Agarwal S, Hoffecker L, Shah VN. Inequity in racial-ethnic representation in randomized controlled trials of diabetes technologies in type 1 diabetes: critical need for new standards. Diabetes Care. 2021;44(6):e121-e123. doi:10.2337/dc20-3063
40. Ebekozien O, Mungmode A, Buckingham D, et al. Achieving equity in diabetes research: borrowing from the field of quality improvement using a practical framework and improvement tools. Diabetes Spectr. 2022;35(3):304-312. doi:10.2337/dsi22-0002
41. Zhang J, Xu J, Lim J, et al. Wearable glucose monitoring and implantable drug delivery systems for diabetes management. Adv Healthc Mater. 2021;10(17):e2100194. doi:10.1002/adhm.202100194
42. FDA expands remote patient monitoring in hospitals for people with diabetes during COVID-19; manufacturers donate CGM supplies. News release. April 21, 2020. Accessed August 30, 2022. https://www.diabetes.org/newsroom/press-releases/2020/fda-remote-patient-monitoring-cgm
43. Campbell P. FDA grants Dexcom CGM breakthrough designation for in-hospital use. March 2, 2022. Accessed August 30, 2022. https://www.endocrinologynetwork.com/view/fda-grants-dexcom-cgm-breakthrough-designation-for-in-hospital-use
44. Yeh T, Yeung M, Mendelsohn Curanaj FA. Managing patients with insulin pumps and continuous glucose monitors in the hospital: to wear or not to wear. Curr Diab Rep. 2021;21(2):7. doi:10.1007/s11892-021-01375-7
45. Medtronic announces FDA approval for MiniMed 770G insulin pump system. News release. September 21, 2020. Accessed August 30, 2022. https://bit.ly/3TyEna4
46. Tandem Diabetes Care announces commercial launch of the t:slim X2 insulin pump with Control-IQ technology in the United States. News release. January 15, 2020. Accessed August 30, 2022. https://investor.tandemdiabetes.com/news-releases/news-release-details/tandem-diabetes-care-announces-commercial-launch-tslim-x2-0
47. Garza M, Gutow H, Mahoney K. Omnipod 5 cleared by the FDA. Updated August 22, 2022. Accessed August 30, 2022. https://diatribe.org/omnipod-5-approved-fda
48. Boughton CK. Fully closed-loop insulin delivery—are we nearly there yet? Lancet Digit Health. 2021;3(11):e689-e690. doi:10.1016/s2589-7500(21)00218-1
49. Noor N, Kamboj MK, Triolo T, et al. Hybrid closed-loop systems and glycemic outcomes in children and adults with type 1 diabetes: real-world evidence from a U.S.-based multicenter collaborative. Diabetes Care. 2022;45(8):e118-e119. doi:10.2337/dc22-0329
50. Medtronic launches InPen with real-time Guardian Connect CGM data--the first integrated smart insulin pen for people with diabetes on MDI. News release. November 12, 2020. Accessed August 30, 2022. https://bit.ly/3CTSWPL
51. Bigfoot Biomedical receives FDA clearance for Bigfoot Unity Diabetes Management System, featuring first-of-its-kind smart pen caps for insulin pens used to treat type 1 and type 2 diabetes. News release. May 10, 2021. Accessed August 30, 2022. https://bit.ly/3BeyoAh
52. Vieira G. All about the CeQur Simplicity insulin patch. Updated May 24, 2022. Accessed August 30, 2022. https://beyondtype1.org/cequr-simplicity-insulin-patch/
53. Messer LH, Tanenbaum ML, Cook PF, et al. Cost, hassle, and on-body experience: barriers to diabetes device use in adolescents and potential intervention targets. Diabetes Technol Ther. 2020;22(10):760-767. doi:10.1089/dia.2019.0509
54. Hilliard ME, Levy W, Anderson BJ, et al. Benefits and barriers of continuous glucose monitoring in young children with type 1 diabetes. Diabetes Technol Ther. 2019;21(9):493-498. doi:10.1089/dia.2019.0142
55. Dexcom G7 Release Delayed Until Late 2022. News release. August 8, 2022. Accessed September 7, 2022. https://diatribe.org/dexcom-g7-release-delayed-until-late-2022
56. Drucker DJ. Transforming type 1 diabetes: the next wave of innovation. Diabetologia. 2021;64(5):1059-1065. doi:10.1007/s00125-021-05396-5
57. Garg SK, Rodriguez E, Shah VN, Hirsch IB. New medications for the treatment of diabetes. Diabetes Technol Ther. 2022;24(S1):S190-S208. doi:10.1089/dia.2022.2513
58. Melton D. The promise of stem cell-derived islet replacement therapy. Diabetologia. 2021;64(5):1030-1036. doi:10.1007/s00125-020-05367-2
59. Danne T, Heinemann L, Bolinder J. New insulins, biosimilars, and insulin therapy. Diabetes Technol Ther. 2022;24(S1):S35-S57. doi:10.1089/dia.2022.2503
60. Kenney J. Insulin copay caps–a path to affordability. July 6, 2021. Accessed August 30, 2022. https://diatribechange.org/news/insulin-copay-caps-path-affordability
61. Glied SA, Zhu B. Not so sweet: insulin affordability over time. September 25, 2020. Accessed August 30, 2022. https://www.commonwealthfund.org/publications/issue-briefs/2020/sep/not-so-sweet-insulin-affordability-over-time
62. American Diabetes Association. Insulin and drug affordability. Accessed August 30, 2022. https://www.diabetes.org/advocacy/insulin-and-drug-affordability
63. Sullivan P. Chances for drug pricing, surprise billing action fade until November. March 24, 2020. Accessed August 30, 2022. https://thehill.com/policy/healthcare/489334-chances-for-drug-pricing-surprise-billing-action-fade-until-november/
64. Brown TD. How Medicare’s new Senior Savings Model makes insulin more affordable. June 4, 2020. Accessed August 30, 2022. https://www.diabetes.org/blog/how-medicares-new-senior-savings-model-makes-insulin-more-affordable
65. American Diabetes Association. ADA applauds the U.S. House of Representatives passage of the Affordable Insulin Now Act. News release. April 1, 2022. https://www.diabetes.org/newsroom/official-statement/2022/ada-applauds-us-house-of-representatives-passage-of-the-affordable-insulin-now-act
66. JDRF. Driving T1D cures during challenging times. 2022.
67. Medtronic announces ongoing initiatives to address health equity for people of color living with diabetes. News release. April 7, 2021. Accessed August 30, 2022. https://bit.ly/3KGTOZU
From the T1D Exchange, Boston, MA (Ann Mungmode, Nicole Rioles, Jesse Cases, Dr. Ebekozien); The Leona M. and Harry B. Hemsley Charitable Trust, New York, NY (Laurel Koester); and the University of Mississippi School of Population Health, Jackson, MS (Dr. Ebekozien).
Abstract
There have been remarkable innovations in diabetes management since the start of the COVID-19 pandemic, but these groundbreaking innovations are drawing limited focus as the field focuses on the adverse impact of the pandemic on patients with diabetes. This article reviews select population health innovations in diabetes management that have become available over the past 2 years of the COVID-19 pandemic from the perspective of the T1D Exchange Quality Improvement Collaborative, a learning health network that focuses on improving care and outcomes for individuals with type 1 diabetes (T1D). Such innovations include expanded telemedicine access, collection of real-world data, machine learning and artificial intelligence, and new diabetes medications and devices. In addition, multiple innovative studies have been undertaken to explore contributors to health inequities in diabetes, and advocacy efforts for specific populations have been successful. Looking to the future, work is required to explore additional health equity successes that do not further exacerbate inequities and to look for additional innovative ways to engage people with T1D in their health care through conversations on social determinants of health and societal structures.
Keywords: type 1 diabetes, learning health network, continuous glucose monitoring, health equity
One in 10 people in the United States has diabetes.1 Diabetes is the nation’s second leading cause of death, costing the US health system more than $300 billion annually.2 The COVID-19 pandemic presented additional health burdens for people living with diabetes. For example, preexisting diabetes was identified as a risk factor for COVID-19–associated morbidity and mortality.3,4 Over the past 2 years, there have been remarkable innovations in diabetes management, including stem cell therapy and new medication options. Additionally, improved technology solutions have aided in diabetes management through continuous glucose monitors (CGM), smart insulin pens, advanced hybrid closed-loop systems, and continuous subcutaneous insulin injections.5,6 Unfortunately, these groundbreaking innovations are drawing limited focus, as the field is rightfully focused on the adverse impact of the pandemic on patients with diabetes.
Learning health networks like the T1D Exchange Quality Improvement Collaborative (T1DX-QI) have implemented some of these innovative solutions to improve care for people with diabetes.7 T1DX-QI has more than 50 data-sharing endocrinology centers that care for over 75,000 people with diabetes across the United States (Figure 1). Centers participating in the T1DX-QI use quality improvement (QI) and implementation science methods to quickly translate research into evidence-based clinical practice. T1DX-QI leads diabetes population health and health system research and supports widespread transferability across health care organizations through regular collaborative calls, conferences, and case study documentation.8
In this review, we summarize impactful population health innovations in diabetes management that have become available over the past 2 years of the COVID-19 pandemic from the perspective of T1DX-QI (see Figure 2 for relevant definitions). This review is limited in scope and is not meant to be an exhaustive list of innovations. The review also reflects significant changes from the perspective of academic diabetes centers, which may not apply to rural or primary care diabetes practices.
Methods
The first (A.M.), second (H.H.), and senior (O.E.) authors conducted a scoping review of published literature using terms related to diabetes, population health, and innovation on PubMed Central and Google Scholar for the period March 2020 to June 2022. To complement the review, A.M. and O.E. also reviewed abstracts from presentations at major international diabetes conferences, including the American Diabetes Association (ADA), the International Society for Pediatric and Adolescent Diabetes (ISPAD), the T1DX-QI Learning Session Conference, and the Advanced Technologies & Treatments for Diabetes (ATTD) 2020 to 2022 conferences.9-14 The authors also searched FDA.gov and ClinicalTrials.gov for relevant insights. A.M. and O.E. sorted the reviewed literature into major themes (Figure 3) from the population health improvement perspective of the T1DX-QI.
Population Health Innovations in Diabetes Management
Expansion of Telemedicine Access
Telemedicine is cost-effective for patients with diabetes,15 including those with complex cases.16 Before the COVID-19 pandemic, telemedicine and virtual care were rare in diabetes management. However, the pandemic offered a new opportunity to expand the practice of telemedicine in diabetes management. A study from the T1DX-QI showed that telemedicine visits grew from comprising <1% of visits pre-pandemic (December 2019) to 95.2% during the pandemic (August 2020).17 Additional studies, like those conducted by Phillip et al,18 confirmed the noninferiority of telemedicine practice for patients with diabetes.Telemedicine was also found to be an effective strategy to educate patients on the use of diabetes technologies.19
Real-World Data and Disease Surveillance
As the COVID-19 pandemic exacerbated outcomes for people with type 1 diabetes (T1D), a need arose to understand the immediate effects of the pandemic on people with T1D through real-world data and disease surveillance. In April 2020, the T1DX-QI initiated a multicenter surveillance study to collect data and analyze the impact of COVID-19 on people with T1D. The existing health collaborative served as a springboard for robust surveillance study, documenting numerous works on the effects of COVID-19.3,4,20-28 Other investigators also embraced the power of real-world surveillance and real-world data.29,30
Big Data, Machine Learning, and Artificial Intelligence
The past 2 years have seen a shift toward embracing the incredible opportunity to tap the large volume of data generated from routine care for practical insights.31 In particular, researchers have demonstrated the widespread application of machine learning and artificial intelligence to improve diabetes management.32 The T1DX-QI also harnessed the growing power of big data by expanding the functionality of innovative benchmarking software. The T1DX QI Portal uses electronic medical record data of diabetes patients for clinic-to-clinic benchmarking and data analysis, using business intelligence solutions.33
Health Equity
While inequities across various health outcomes have been well documented for years,34 the COVID-19 pandemic further exaggerated racial/ethnic health inequities in T1D.23,35 In response, several organizations have outlined specific strategies to address these health inequities. Emboldened by the pandemic, the T1DX-QI announced a multipronged approach to address health inequities among patients with T1D through the Health Equity Advancement Lab (HEAL).36 One of HEAL’s main components is using real-world data to champion population-level insights and demonstrate progress in QI efforts.
Multiple innovative studies have been undertaken to explore contributors to health inequities in diabetes, and these studies are expanding our understanding of the chasm.37 There have also been innovative solutions to addressing these inequities, with multiple studies published over the past 2 years.38 A source of inequity among patients with T1D is the lack of representation of racial/ethnic minorities with T1D in clinical trials.39 The T1DX-QI suggests that the equity-adapted framework for QI can be applied by research leaders to support trial diversity and representation, ensuring future device innovations are meaningful for all people with T1D.40
Diabetes Devices
Glucose monitoring and insulin therapy are vital tools to support individuals living with T1D, and devices such as CGM and insulin pumps have become the standard of care for diabetes management (Table).41 Innovations in diabetes technology and device access are imperative for a chronic disease with no cure.
The COVID-19 pandemic created an opportunity to increase access to diabetes devices in inpatient settings. In 2020, the US Food and Drug Administration expanded the use of CGM to support remote monitoring of patients in inpatient hospital settings, simultaneously supporting the glucose monitoring needs of patients with T1D and reducing COVID-19 transmission through reduced patient-clinician contact.42 This effort has been expanded and will continue in 2022 and beyond,43 and aligns with the growing consensus that supports patients wearing both CGMs and insulin pumps in ambulatory settings to improve patient health outcomes.44
Since 2020, innovations in diabetes technology have improved and increased the variety of options available to people with T1D and made them easier to use (Table). New, advanced hybrid closed-loop systems have progressed to offer Bluetooth features, including automatic software upgrades, tubeless systems, and the ability to allow parents to use their smartphones to bolus for children.45-47 The next big step in insulin delivery innovation is the release of functioning, fully closed loop systems, of which several are currently in clinical trials.48 These systems support reduced hypoglycemia and improved time in range.49
Additional innovations in insulin delivery have improved the user experience and expanded therapeutic options, including a variety of smart insulin pens complete with dosing logs50,51 and even a patch to deliver insulin without the burden of injections.52 As barriers to diabetes technology persist,53 innovations in alternate insulin delivery provide people with T1D more options to align with their personal access and technology preferences.
Innovations in CGM address cited barriers to their use, including size or overall wear.53-55 CGMs released in the past few years are smaller in physical size, have longer durations of time between changings, are more accurate, and do not require calibrations for accuracy.
New Diabetes Medications
Many new medications and therapeutic advances have become available in the past 2 years.56 Additionally, more medications are being tested as adjunct therapies to support glycemic management in patients with T1D, including metformin, sodium-glucose cotransporter 1 and 2 inhibitors, pramlintide, glucagon-like polypeptide-1 analogs, and glucagon receptor agonists.57 Other recent advances include stem cell replacement therapy for patients with T1D.58 The ultra-long-acting biosimilar insulins are one medical innovation that has been stalled, rather than propelled, during the COVID-19 pandemic.59
Diabetes Policy Advocacy
People with T1D require insulin to survive. The cost of insulin has increased in recent years, with some studies citing a 64% to 100% increase in the past decade.60,61 In fact, 1 in 4 insulin users report that cost has impacted their insulin use, including rationing their insulin.62 Lockdowns during the COVID-19 pandemic stressed US families financially, increasing the urgency for insulin cost caps.
Although the COVID-19 pandemic halted national conversations on drug financing,63 advocacy efforts have succeeded for specific populations. The new Medicare Part D Senior Savings Model will cap the cost of insulin at $35 for a 30-day supply,64 and 20 states passed legislation capping insulin pricing.62 Efforts to codify national cost caps are under debate, including the passage of the Affordable Insulin Now Act, which passed the House in March 2022 and is currently under review in the Senate.65
Perspective: The Role of Private Philanthropy in Supporting Population Health Innovations
Funders and industry partners play a crucial role in leading and supporting innovations that improve the lives of people with T1D and reduce society’s costs of living with the disease. Data infrastructure is critical to supporting population health. While building the data infrastructure to support population health is both time- and resource-intensive, private foundations such as Helmsley are uniquely positioned—and have a responsibility—to take large, informed risks to help reach all communities with T1D.
The T1DX-QI is the largest source of population health data on T1D in the United States and is becoming the premiere data authority on its incidence, prevalence, and outcomes. The T1DX-QI enables a robust understanding of T1D-related health trends at the population level, as well as trends among clinics and providers. Pilot centers in the T1DX-QI have reported reductions in patients’ A1c and acute diabetes-related events, as well as improvements in device usage and depression screening. The ability to capture changes speaks to the promise and power of these data to demonstrate the clinical impact of QI interventions and to support the spread of best practices and learnings across health systems.
Additional philanthropic efforts have supported innovation in the last 2 years. For example, the JDRF, a nonprofit philanthropic equity firm, has supported efforts in developing artificial pancreas systems and cell therapies currently in clinical trials like teplizumab, a drug that has demonstrated delayed onset of T1D through JDRF’s T1D Fund.66 Industry partners also have an opportunity for significant influence in this area, as they continue to fund meaningful projects to advance care for people with T1D.67
Conclusion
We are optimistic that the innovations summarized here describe a shift in the tide of equitable T1D outcomes; however, future work is required to explore additional health equity successes that do not further exacerbate inequities. We also see further opportunities for innovative ways to engage people with T1D in their health care through conversations on social determinants of health and societal structures.
Corresponding author: Ann Mungmode, MPH, T1D Exchange, 11 Avenue de Lafayette, Boston, MA 02111; Email: amungmode@t1dexchange.org
Disclosures: Dr. Ebekozien serve(d) as a director, officer, partner, employee, advisor, consultant, or trustee for the Medtronic Advisory Board and received research grants from Medtronic Diabetes, Eli Lilly, and Dexcom.
Funding: The T1DX-QI is funded by The Leona M. and Harry B. Hemsley Charitable Trust.
From the T1D Exchange, Boston, MA (Ann Mungmode, Nicole Rioles, Jesse Cases, Dr. Ebekozien); The Leona M. and Harry B. Hemsley Charitable Trust, New York, NY (Laurel Koester); and the University of Mississippi School of Population Health, Jackson, MS (Dr. Ebekozien).
Abstract
There have been remarkable innovations in diabetes management since the start of the COVID-19 pandemic, but these groundbreaking innovations are drawing limited focus as the field focuses on the adverse impact of the pandemic on patients with diabetes. This article reviews select population health innovations in diabetes management that have become available over the past 2 years of the COVID-19 pandemic from the perspective of the T1D Exchange Quality Improvement Collaborative, a learning health network that focuses on improving care and outcomes for individuals with type 1 diabetes (T1D). Such innovations include expanded telemedicine access, collection of real-world data, machine learning and artificial intelligence, and new diabetes medications and devices. In addition, multiple innovative studies have been undertaken to explore contributors to health inequities in diabetes, and advocacy efforts for specific populations have been successful. Looking to the future, work is required to explore additional health equity successes that do not further exacerbate inequities and to look for additional innovative ways to engage people with T1D in their health care through conversations on social determinants of health and societal structures.
Keywords: type 1 diabetes, learning health network, continuous glucose monitoring, health equity
One in 10 people in the United States has diabetes.1 Diabetes is the nation’s second leading cause of death, costing the US health system more than $300 billion annually.2 The COVID-19 pandemic presented additional health burdens for people living with diabetes. For example, preexisting diabetes was identified as a risk factor for COVID-19–associated morbidity and mortality.3,4 Over the past 2 years, there have been remarkable innovations in diabetes management, including stem cell therapy and new medication options. Additionally, improved technology solutions have aided in diabetes management through continuous glucose monitors (CGM), smart insulin pens, advanced hybrid closed-loop systems, and continuous subcutaneous insulin injections.5,6 Unfortunately, these groundbreaking innovations are drawing limited focus, as the field is rightfully focused on the adverse impact of the pandemic on patients with diabetes.
Learning health networks like the T1D Exchange Quality Improvement Collaborative (T1DX-QI) have implemented some of these innovative solutions to improve care for people with diabetes.7 T1DX-QI has more than 50 data-sharing endocrinology centers that care for over 75,000 people with diabetes across the United States (Figure 1). Centers participating in the T1DX-QI use quality improvement (QI) and implementation science methods to quickly translate research into evidence-based clinical practice. T1DX-QI leads diabetes population health and health system research and supports widespread transferability across health care organizations through regular collaborative calls, conferences, and case study documentation.8
In this review, we summarize impactful population health innovations in diabetes management that have become available over the past 2 years of the COVID-19 pandemic from the perspective of T1DX-QI (see Figure 2 for relevant definitions). This review is limited in scope and is not meant to be an exhaustive list of innovations. The review also reflects significant changes from the perspective of academic diabetes centers, which may not apply to rural or primary care diabetes practices.
Methods
The first (A.M.), second (H.H.), and senior (O.E.) authors conducted a scoping review of published literature using terms related to diabetes, population health, and innovation on PubMed Central and Google Scholar for the period March 2020 to June 2022. To complement the review, A.M. and O.E. also reviewed abstracts from presentations at major international diabetes conferences, including the American Diabetes Association (ADA), the International Society for Pediatric and Adolescent Diabetes (ISPAD), the T1DX-QI Learning Session Conference, and the Advanced Technologies & Treatments for Diabetes (ATTD) 2020 to 2022 conferences.9-14 The authors also searched FDA.gov and ClinicalTrials.gov for relevant insights. A.M. and O.E. sorted the reviewed literature into major themes (Figure 3) from the population health improvement perspective of the T1DX-QI.
Population Health Innovations in Diabetes Management
Expansion of Telemedicine Access
Telemedicine is cost-effective for patients with diabetes,15 including those with complex cases.16 Before the COVID-19 pandemic, telemedicine and virtual care were rare in diabetes management. However, the pandemic offered a new opportunity to expand the practice of telemedicine in diabetes management. A study from the T1DX-QI showed that telemedicine visits grew from comprising <1% of visits pre-pandemic (December 2019) to 95.2% during the pandemic (August 2020).17 Additional studies, like those conducted by Phillip et al,18 confirmed the noninferiority of telemedicine practice for patients with diabetes.Telemedicine was also found to be an effective strategy to educate patients on the use of diabetes technologies.19
Real-World Data and Disease Surveillance
As the COVID-19 pandemic exacerbated outcomes for people with type 1 diabetes (T1D), a need arose to understand the immediate effects of the pandemic on people with T1D through real-world data and disease surveillance. In April 2020, the T1DX-QI initiated a multicenter surveillance study to collect data and analyze the impact of COVID-19 on people with T1D. The existing health collaborative served as a springboard for robust surveillance study, documenting numerous works on the effects of COVID-19.3,4,20-28 Other investigators also embraced the power of real-world surveillance and real-world data.29,30
Big Data, Machine Learning, and Artificial Intelligence
The past 2 years have seen a shift toward embracing the incredible opportunity to tap the large volume of data generated from routine care for practical insights.31 In particular, researchers have demonstrated the widespread application of machine learning and artificial intelligence to improve diabetes management.32 The T1DX-QI also harnessed the growing power of big data by expanding the functionality of innovative benchmarking software. The T1DX QI Portal uses electronic medical record data of diabetes patients for clinic-to-clinic benchmarking and data analysis, using business intelligence solutions.33
Health Equity
While inequities across various health outcomes have been well documented for years,34 the COVID-19 pandemic further exaggerated racial/ethnic health inequities in T1D.23,35 In response, several organizations have outlined specific strategies to address these health inequities. Emboldened by the pandemic, the T1DX-QI announced a multipronged approach to address health inequities among patients with T1D through the Health Equity Advancement Lab (HEAL).36 One of HEAL’s main components is using real-world data to champion population-level insights and demonstrate progress in QI efforts.
Multiple innovative studies have been undertaken to explore contributors to health inequities in diabetes, and these studies are expanding our understanding of the chasm.37 There have also been innovative solutions to addressing these inequities, with multiple studies published over the past 2 years.38 A source of inequity among patients with T1D is the lack of representation of racial/ethnic minorities with T1D in clinical trials.39 The T1DX-QI suggests that the equity-adapted framework for QI can be applied by research leaders to support trial diversity and representation, ensuring future device innovations are meaningful for all people with T1D.40
Diabetes Devices
Glucose monitoring and insulin therapy are vital tools to support individuals living with T1D, and devices such as CGM and insulin pumps have become the standard of care for diabetes management (Table).41 Innovations in diabetes technology and device access are imperative for a chronic disease with no cure.
The COVID-19 pandemic created an opportunity to increase access to diabetes devices in inpatient settings. In 2020, the US Food and Drug Administration expanded the use of CGM to support remote monitoring of patients in inpatient hospital settings, simultaneously supporting the glucose monitoring needs of patients with T1D and reducing COVID-19 transmission through reduced patient-clinician contact.42 This effort has been expanded and will continue in 2022 and beyond,43 and aligns with the growing consensus that supports patients wearing both CGMs and insulin pumps in ambulatory settings to improve patient health outcomes.44
Since 2020, innovations in diabetes technology have expanded the variety of options available to people with T1D and made devices easier to use (Table). New advanced hybrid closed-loop systems offer features such as Bluetooth connectivity with automatic software upgrades, tubeless designs, and the ability for parents to bolus for their children from a smartphone.45-47 The next big step in insulin delivery innovation is the release of fully closed-loop systems, several of which are currently in clinical trials.48 These systems support reduced hypoglycemia and improved time in range.49
Additional innovations in insulin delivery have improved the user experience and expanded therapeutic options, including a variety of smart insulin pens complete with dosing logs50,51 and even a patch to deliver insulin without the burden of injections.52 As barriers to diabetes technology persist,53 innovations in alternate insulin delivery provide people with T1D more options to align with their personal access and technology preferences.
Innovations in CGM address commonly cited barriers to use, including device size and the burden of wear.53-55 CGMs released in the past few years are physically smaller, last longer between sensor changes, are more accurate, and do not require calibration.
New Diabetes Medications
Many new medications and therapeutic advances have become available in the past 2 years.56 Additionally, more medications are being tested as adjunct therapies to support glycemic management in patients with T1D, including metformin, sodium-glucose cotransporter 1 and 2 inhibitors, pramlintide, glucagon-like peptide-1 analogs, and glucagon receptor antagonists.57 Other recent advances include stem cell replacement therapy for patients with T1D.58 Ultra-long-acting and biosimilar insulins are one medical innovation that has stalled, rather than advanced, during the COVID-19 pandemic.59
Diabetes Policy Advocacy
People with T1D require insulin to survive. The cost of insulin has increased in recent years, with some studies citing a 64% to 100% increase in the past decade.60,61 In fact, 1 in 4 insulin users report that cost has impacted their insulin use, including rationing their insulin.62 Lockdowns during the COVID-19 pandemic stressed US families financially, increasing the urgency for insulin cost caps.
Although the COVID-19 pandemic halted national conversations on drug financing,63 advocacy efforts have succeeded for specific populations. The new Medicare Part D Senior Savings Model will cap the cost of insulin at $35 for a 30-day supply,64 and 20 states have passed legislation capping insulin pricing.62 Efforts to codify national cost caps are under debate, including the Affordable Insulin Now Act, which passed the House in March 2022 and is currently under review in the Senate.65
Perspective: The Role of Private Philanthropy in Supporting Population Health Innovations
Funders and industry partners play a crucial role in leading and supporting innovations that improve the lives of people with T1D and reduce the societal costs of living with the disease. Data infrastructure is critical to supporting population health, but building it is both time- and resource-intensive; private foundations such as Helmsley are therefore uniquely positioned, and have a responsibility, to take large, informed risks to help reach all communities with T1D.
The T1DX-QI is the largest source of population health data on T1D in the United States and is becoming the premier data authority on its incidence, prevalence, and outcomes. The T1DX-QI enables a robust understanding of T1D-related health trends at the population level, as well as trends among clinics and providers. Pilot centers in the T1DX-QI have reported reductions in patients’ A1c and acute diabetes-related events, as well as improvements in device usage and depression screening. The ability to capture such changes speaks to the promise and power of these data to demonstrate the clinical impact of QI interventions and to support the spread of best practices and lessons learned across health systems.
Additional philanthropic efforts have supported innovation in the past 2 years. For example, JDRF, a nonprofit organization, has supported the development of artificial pancreas systems and cell therapies currently in clinical trials and, through its T1D Fund, has backed teplizumab, a drug shown to delay the onset of T1D.66 Industry partners also have an opportunity for significant influence in this area as they continue to fund meaningful projects to advance care for people with T1D.67
Conclusion
We are optimistic that the innovations summarized here signal a shift in the tide toward equitable T1D outcomes; however, future work is required to pursue additional health equity successes that do not further exacerbate inequities. We also see further opportunities for innovative ways to engage people with T1D in their health care through conversations on social determinants of health and societal structures.
Corresponding author: Ann Mungmode, MPH, T1D Exchange, 11 Avenue de Lafayette, Boston, MA 02111; Email: amungmode@t1dexchange.org
Disclosures: Dr. Ebekozien serve(d) as a director, officer, partner, employee, advisor, consultant, or trustee for the Medtronic Advisory Board and received research grants from Medtronic Diabetes, Eli Lilly, and Dexcom.
Funding: The T1DX-QI is funded by The Leona M. and Harry B. Helmsley Charitable Trust.
1. Centers for Disease Control and Prevention. National diabetes statistics report. Accessed August 30, 2022. www.cdc.gov/diabetes/data/statistics-report/index.html
2. Centers for Disease Control and Prevention. Diabetes fast facts. Accessed August 30, 2022. www.cdc.gov/diabetes/basics/quick-facts.html
3. O’Malley G, Ebekozien O, Desimone M, et al. COVID-19 hospitalization in adults with type 1 diabetes: results from the T1D Exchange Multicenter Surveillance Study. J Clin Endocrinol Metab. 2020;106(2):e936-e942. doi:10.1210/clinem/dgaa825
4. Ebekozien OA, Noor N, Gallagher MP, Alonso GT. Type 1 diabetes and COVID-19: preliminary findings from a multicenter surveillance study in the U.S. Diabetes Care. 2020;43(8):e83-e85. doi:10.2337/dc20-1088
5. Zimmerman C, Albanese-O’Neill A, Haller MJ. Advances in type 1 diabetes technology over the last decade. Eur Endocrinol. 2019;15(2):70-76. doi:10.17925/ee.2019.15.2.70
6. Wake DJ, Gibb FW, Kar P, et al. Endocrinology in the time of COVID-19: remodelling diabetes services and emerging innovation. Eur J Endocrinol. 2020;183(2):G67-G77. doi:10.1530/eje-20-0377
7. Alonso GT, Corathers S, Shah A, et al. Establishment of the T1D Exchange Quality Improvement Collaborative (T1DX-QI). Clin Diabetes. 2020;38(2):141-151. doi:10.2337/cd19-0032
8. Ginnard OZB, Alonso GT, Corathers SD, et al. Quality improvement in diabetes care: a review of initiatives and outcomes in the T1D Exchange Quality Improvement Collaborative. Clin Diabetes. 2021;39(3):256-263. doi:10.2337/cd21-0029
9. ATTD 2021 invited speaker abstracts. Diabetes Technol Ther. 2021;23(S2):A1-A206. doi:10.1089/dia.2021.2525.abstracts
10. Rompicherla SN, Edelen N, Gallagher R, et al. Children and adolescent patients with pre-existing type 1 diabetes and additional comorbidities have an increased risk of hospitalization from COVID-19; data from the T1D Exchange COVID Registry. Pediatr Diabetes. 2021;22(S30):3-32. doi:10.1111/pedi.13268
11. Abstracts for the T1D Exchange QI Collaborative (T1DX-QI) Learning Session 2021. November 8-9, 2021. J Diabetes. 2021;13(S1):3-17. doi:10.1111/1753-0407.13227
12. The Official Journal of ATTD Advanced Technologies & Treatments for Diabetes conference 27-30 April 2022. Barcelona and online. Diabetes Technol Ther. 2022;24(S1):A1-A237. doi:10.1089/dia.2022.2525.abstracts
13. Ebekozien ON, Kamboj N, Odugbesan MK, et al. Inequities in glycemic outcomes for patients with type 1 diabetes: six-year (2016-2021) longitudinal follow-up by race and ethnicity of 36,390 patients in the T1DX-QI Collaborative. Diabetes. 2022;71(suppl 1). doi:10.2337/db22-167-OR
14. Narayan KA, Noor M, Rompicherla N, et al. No BMI increase during the COVID-pandemic in children and adults with T1D in three continents: joint analysis of ADDN, T1DX, and DPV registries. Diabetes. 2022;71(suppl 1). doi:10.2337/db22-269-OR
15. Lee JY, Lee SWH. Telemedicine cost-effectiveness for diabetes management: a systematic review. Diabetes Technol Ther. 2018;20(7):492-500. doi:10.1089/dia.2018.0098
16. McDonnell ME. Telemedicine in complex diabetes management. Curr Diab Rep. 2018;18(7):42. doi:10.1007/s11892-018-1015-3
17. Lee JM, Carlson E, Albanese-O’Neill A, et al. Adoption of telemedicine for type 1 diabetes care during the COVID-19 pandemic. Diabetes Technol Ther. 2021;23(9):642-651. doi:10.1089/dia.2021.0080
18. Phillip M, Bergenstal RM, Close KL, et al. The digital/virtual diabetes clinic: the future is now–recommendations from an international panel on diabetes digital technologies introduction. Diabetes Technol Ther. 2021;23(2):146-154. doi:10.1089/dia.2020.0375
19. Garg SK, Rodriguez E. COVID‐19 pandemic and diabetes care. Diabetes Technol Ther. 2022;24(S1):S2-S20. doi:10.1089/dia.2022.2501
20. Beliard K, Ebekozien O, Demeterco-Berggren C, et al. Increased DKA at presentation among newly diagnosed type 1 diabetes patients with or without COVID-19: data from a multi-site surveillance registry. J Diabetes. 2021;13(3):270-272. doi:10.1111/1753-0407.13141
21. Ebekozien O, Agarwal S, Noor N, et al. Inequities in diabetic ketoacidosis among patients with type 1 diabetes and COVID-19: data from 52 US clinical centers. J Clin Endocrinol Metab. 2020;106(4):1755-1762. doi:10.1210/clinem/dgaa920
22. Alonso GT, Ebekozien O, Gallagher MP, et al. Diabetic ketoacidosis drives COVID-19 related hospitalizations in children with type 1 diabetes. J Diabetes. 2021;13(8):681-687. doi:10.1111/1753-0407.13184
23. Noor N, Ebekozien O, Levin L, et al. Diabetes technology use for management of type 1 diabetes is associated with fewer adverse COVID-19 outcomes: findings from the T1D Exchange COVID-19 Surveillance Registry. Diabetes Care. 2021;44(8):e160-e162. doi:10.2337/dc21-0074
24. Demeterco-Berggren C, Ebekozien O, Rompicherla S, et al. Age and hospitalization risk in people with type 1 diabetes and COVID-19: data from the T1D Exchange Surveillance Study. J Clin Endocrinol Metab. 2021;107(2):410-418. doi:10.1210/clinem/dgab668
25. DeSalvo DJ, Noor N, Xie C, et al. Patient demographics and clinical outcomes among type 1 diabetes patients using continuous glucose monitors: data from T1D Exchange real-world observational study. J Diabetes Sci Technol. 2021 Oct 9. [Epub ahead of print] doi:10.1177/19322968211049783
26. Gallagher MP, Rompicherla S, Ebekozien O, et al. Differences in COVID-19 outcomes among patients with type 1 diabetes: first vs later surges. J Clin Outcomes Manage. 2022;29(1):27-31. doi:10.12788/jcom.0084
27. Wolf RM, Noor N, Izquierdo R, et al. Increase in newly diagnosed type 1 diabetes in youth during the COVID-19 pandemic in the United States: a multi-center analysis. Pediatr Diabetes. 2022;23(4):433-438. doi:10.1111/pedi.13328
28. Lavik AR, Ebekozien O, Noor N, et al. Trends in type 1 diabetic ketoacidosis during COVID-19 surges at 7 US centers: highest burden on non-Hispanic Black patients. J Clin Endocrinol Metab. 2022;107(7):1948-1955. doi:10.1210/clinem/dgac158
29. van der Linden J, Welsh JB, Hirsch IB, Garg SK. Real-time continuous glucose monitoring during the coronavirus disease 2019 pandemic and its impact on time in range. Diabetes Technol Ther. 2021;23(S1):S1-S7. doi:10.1089/dia.2020.0649
30. Nwosu BU, Al-Halbouni L, Parajuli S, et al. COVID-19 pandemic and pediatric type 1 diabetes: no significant change in glycemic control during the pandemic lockdown of 2020. Front Endocrinol (Lausanne). 2021;12:703905. doi:10.3389/fendo.2021.703905
31. Ellahham S. Artificial intelligence: the future for diabetes care. Am J Med. 2020;133(8):895-900. doi:10.1016/j.amjmed.2020.03.033
32. Nomura A, Noguchi M, Kometani M, et al. Artificial intelligence in current diabetes management and prediction. Curr Diab Rep. 2021;21(12):61. doi:10.1007/s11892-021-01423-2
33. Mungmode A, Noor N, Weinstock RS, et al. Making diabetes electronic medical record data actionable: promoting benchmarking and population health using the T1D Exchange Quality Improvement Portal. Clin Diabetes. Forthcoming 2022.
34. Lavizzo-Mourey RJ, Besser RE, Williams DR. Understanding and mitigating health inequities—past, current, and future directions. N Engl J Med. 2021;384(18):1681-1684. doi:10.1056/NEJMp2008628
35. Majidi S, Ebekozien O, Noor N, et al. Inequities in health outcomes in children and adults with type 1 diabetes: data from the T1D Exchange Quality Improvement Collaborative. Clin Diabetes. 2021;39(3):278-283. doi:10.2337/cd21-0028
36. Ebekozien O, Mungmode A, Odugbesan O, et al. Addressing type 1 diabetes health inequities in the United States: approaches from the T1D Exchange QI Collaborative. J Diabetes. 2022;14(1):79-82. doi:10.1111/1753-0407.13235
37. Odugbesan O, Addala A, Nelson G, et al. Implicit racial-ethnic and insurance-mediated bias to recommending diabetes technology: insights from T1D Exchange multicenter pediatric and adult diabetes provider cohort. Diabetes Technol Ther. 2022 Jun 13. [Epub ahead of print] doi:10.1089/dia.2022.0042
38. Schmitt J, Fogle K, Scott ML, Iyer P. Improving equitable access to continuous glucose monitors for Alabama’s children with type 1 diabetes: a quality improvement project. Diabetes Technol Ther. 2022;24(7):481-491. doi:10.1089/dia.2021.0511
39. Akturk HK, Agarwal S, Hoffecker L, Shah VN. Inequity in racial-ethnic representation in randomized controlled trials of diabetes technologies in type 1 diabetes: critical need for new standards. Diabetes Care. 2021;44(6):e121-e123. doi:10.2337/dc20-3063
40. Ebekozien O, Mungmode A, Buckingham D, et al. Achieving equity in diabetes research: borrowing from the field of quality improvement using a practical framework and improvement tools. Diabetes Spectr. 2022;35(3):304-312. doi:10.2237/dsi22-0002
41. Zhang J, Xu J, Lim J, et al. Wearable glucose monitoring and implantable drug delivery systems for diabetes management. Adv Healthc Mater. 2021;10(17):e2100194. doi:10.1002/adhm.202100194
42. FDA expands remote patient monitoring in hospitals for people with diabetes during COVID-19; manufacturers donate CGM supplies. News release. April 21, 2020. Accessed August 30, 2022. https://www.diabetes.org/newsroom/press-releases/2020/fda-remote-patient-monitoring-cgm
43. Campbell P. FDA grants Dexcom CGM breakthrough designation for in-hospital use. March 2, 2022. Accessed August 30, 2022. https://www.endocrinologynetwork.com/view/fda-grants-dexcom-cgm-breakthrough-designation-for-in-hospital-use
44. Yeh T, Yeung M, Mendelsohn Curanaj FA. Managing patients with insulin pumps and continuous glucose monitors in the hospital: to wear or not to wear. Curr Diab Rep. 2021;21(2):7. doi:10.1007/s11892-021-01375-7
45. Medtronic announces FDA approval for MiniMed 770G insulin pump system. News release. September 21, 2020. Accessed August 30, 2022. https://bit.ly/3TyEna4
46. Tandem Diabetes Care announces commercial launch of the t:slim X2 insulin pump with Control-IQ technology in the United States. News release. January 15, 2020. Accessed August 30, 2022. https://investor.tandemdiabetes.com/news-releases/news-release-details/tandem-diabetes-care-announces-commercial-launch-tslim-x2-0
47. Garza M, Gutow H, Mahoney K. Omnipod 5 cleared by the FDA. Updated August 22, 2022. Accessed August 30, 2022. https://diatribe.org/omnipod-5-approved-fda
48. Boughton CK. Fully closed-loop insulin delivery—are we nearly there yet? Lancet Digit Health. 2021;3(11):e689-e690. doi:10.1016/s2589-7500(21)00218-1
49. Noor N, Kamboj MK, Triolo T, et al. Hybrid closed-loop systems and glycemic outcomes in children and adults with type 1 diabetes: real-world evidence from a U.S.-based multicenter collaborative. Diabetes Care. 2022;45(8):e118-e119. doi:10.2337/dc22-0329
50. Medtronic launches InPen with real-time Guardian Connect CGM data--the first integrated smart insulin pen for people with diabetes on MDI. News release. November 12, 2020. Accessed August 30, 2022. https://bit.ly/3CTSWPL
51. Bigfoot Biomedical receives FDA clearance for Bigfoot Unity Diabetes Management System, featuring first-of-its-kind smart pen caps for insulin pens used to treat type 1 and type 2 diabetes. News release. May 10, 2021. Accessed August 30, 2022. https://bit.ly/3BeyoAh
52. Vieira G. All about the CeQur Simplicity insulin patch. Updated May 24, 2022. Accessed August 30, 2022. https://beyondtype1.org/cequr-simplicity-insulin-patch/.
53. Messer LH, Tanenbaum ML, Cook PF, et al. Cost, hassle, and on-body experience: barriers to diabetes device use in adolescents and potential intervention targets. Diabetes Technol Ther. 2020;22(10):760-767. doi:10.1089/dia.2019.0509
54. Hilliard ME, Levy W, Anderson BJ, et al. Benefits and barriers of continuous glucose monitoring in young children with type 1 diabetes. Diabetes Technol Ther. 2019;21(9):493-498. doi:10.1089/dia.2019.0142
55. Dexcom G7 Release Delayed Until Late 2022. News release. August 8, 2022. Accessed September 7, 2022. https://diatribe.org/dexcom-g7-release-delayed-until-late-2022
56. Drucker DJ. Transforming type 1 diabetes: the next wave of innovation. Diabetologia. 2021;64(5):1059-1065. doi:10.1007/s00125-021-05396-5
57. Garg SK, Rodriguez E, Shah VN, Hirsch IB. New medications for the treatment of diabetes. Diabetes Technol Ther. 2022;24(S1):S190-S208. doi:10.1089/dia.2022.2513
58. Melton D. The promise of stem cell-derived islet replacement therapy. Diabetologia. 2021;64(5):1030-1036. doi:10.1007/s00125-020-05367-2
59. Danne T, Heinemann L, Bolinder J. New insulins, biosimilars, and insulin therapy. Diabetes Technol Ther. 2022;24(S1):S35-S57. doi:10.1089/dia.2022.2503
60. Kenney J. Insulin copay caps–a path to affordability. July 6, 2021. Accessed August 30, 2022. https://diatribechange.org/news/insulin-copay-caps-path-affordability
61. Glied SA, Zhu B. Not so sweet: insulin affordability over time. September 25, 2020. Accessed August 30, 2022. https://www.commonwealthfund.org/publications/issue-briefs/2020/sep/not-so-sweet-insulin-affordability-over-time
62. American Diabetes Association. Insulin and drug affordability. Accessed August 30, 2022. https://www.diabetes.org/advocacy/insulin-and-drug-affordability
63. Sullivan P. Chances for drug pricing, surprise billing action fade until November. March 24, 2020. Accessed August 30, 2022. https://thehill.com/policy/healthcare/489334-chances-for-drug-pricing-surprise-billing-action-fade-until-november/
64. Brown TD. How Medicare’s new Senior Savings Model makes insulin more affordable. June 4, 2020. Accessed August 30, 2022. https://www.diabetes.org/blog/how-medicares-new-senior-savings-model-makes-insulin-more-affordable
65. American Diabetes Association. ADA applauds the U.S. House of Representatives passage of the Affordable Insulin Now Act. News release. April 1, 2022. https://www.diabetes.org/newsroom/official-statement/2022/ada-applauds-us-house-of-representatives-passage-of-the-affordable-insulin-now-act
66. JDRF. Driving T1D cures during challenging times. 2022.
67. Medtronic announces ongoing initiatives to address health equity for people of color living with diabetes. News release. April 7, 2021. Accessed August 30, 2022. https://bit.ly/3KGTOZU
Deprescribing in Older Adults in Community and Nursing Home Settings
Study 1 Overview (Bayliss et al)
Objective: To examine the effect of a deprescribing educational intervention on medication use in older adults with cognitive impairment.
Design: This was a pragmatic, cluster randomized trial conducted in 8 primary care clinics that are part of a nonprofit health care system.
Setting and participants: The primary care clinic populations ranged from 170 to 1125 patients per clinic. The primary care clinics were randomly assigned to intervention or control using a uniform distribution in blocks by clinic size. Eligibility criteria for participants at those practices included age 65 years or older; health plan enrollment at least 1 year prior to intervention; diagnosis of Alzheimer disease and related dementia (ADRD) or mild cognitive impairment (MCI) by International Statistical Classification of Diseases and Related Health Problems, Tenth Revision code or from problem list; 1 or more chronic conditions from those in the Chronic Conditions Warehouse; and 5 or more long-term medications. Those who scheduled a visit at their primary care clinic in advance were eligible for the intervention. Primary care clinicians in intervention clinics were eligible to receive the clinician portion of the intervention. A total of 1433 participants were enrolled in the intervention group, and 1579 participants were enrolled in the control group.
Intervention: The intervention included 2 components: a patient and family component with materials mailed in advance of their primary care visits and a clinician component comprising monthly educational materials on deprescribing and notification in the electronic health record about visits with patient participants. The patient and family component consisted of a brochure titled “Managing Medication” and a questionnaire on attitudes toward deprescribing intended to educate patients and family about deprescribing. Clinicians at intervention clinics received an educational presentation at a monthly clinician meeting as well as tip sheets and a poster on deprescribing topics, and they also were notified of upcoming appointments with patients who received the patient component of the intervention. For the control group, patients and family did not receive any materials, and clinicians did not receive intervention materials or notification of participants enrolled in the trial. Usual care in both intervention and control groups included medication reconciliation and electronic health record alerts for potentially high-risk medications.
Main outcome measures: The primary outcomes of the study were the number of long-term medications per individual and the proportion of patients prescribed 1 or more potentially inappropriate medications. Outcome measurements were extracted from the electronic clinical data, and outcomes were assessed at 6 months, which involved comparing counts of medications at baseline with medications at 6 months. Long-term medications were defined as medications that are prescribed for 28 days or more based on pharmacy dispensing data. Potentially inappropriate medications (PIMs) were defined using the Beers list of medications to avoid in those with cognitive impairment and opioid medications. Analyses were conducted as intention to treat.
Main results: In the intervention group and control group, 56.2% and 54.4% of participants were women, and the mean age was 80.1 years (SD, 7.2) and 79.9 years (SD, 7.5), respectively. At baseline, the mean number of long-term medications was 7.0 (SD, 2.1) in the intervention group and 7.0 (SD, 2.2) in the control group. The proportion of patients taking any PIMs was 30.5% in the intervention group and 29.6% in the control group. At 6 months, the mean number of long-term medications was 6.4 in the intervention group and 6.5 in the control group, with an adjusted difference of –0.1 (95% CI, –0.2 to 0.04; P = .14); the proportion of patients with any PIMs was 17.8% in the intervention group and 20.9% in the control group, with an adjusted difference of –3.2% (95% CI, –6.2 to 0.4; P = .08). Preplanned analyses to examine subgroup differences for those with a higher number of medications (7+ vs 5 or 6 medications) did not find different effects of the intervention.
Conclusion: This educational intervention on deprescribing did not result in reductions in the number of medications or the use of PIMs in patients with cognitive impairment.
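For readers who work with pharmacy dispensing data, the outcome definitions above (long-term medications as those dispensed for 28 days or more, and potentially inappropriate medications drawn from the Beers list plus opioids) lend themselves to a simple derivation. The Python sketch below is illustrative only; the column names and the abbreviated PIM list are assumptions, and this is not the study authors' code.

```python
import pandas as pd

# Hypothetical, abbreviated stand-in for the Beers-list medications to avoid in
# cognitive impairment plus opioids; the study used the full published criteria.
EXAMPLE_PIM_LIST = {"diphenhydramine", "lorazepam", "zolpidem", "oxycodone"}

def primary_outcomes(dispenses: pd.DataFrame) -> pd.DataFrame:
    """Per-patient count of long-term medications and an any-PIM indicator.

    Assumes columns: patient_id, drug_name, days_supply.
    """
    # Long-term medication: prescribed/dispensed for 28 days or more.
    long_term = dispenses[dispenses["days_supply"] >= 28]
    n_long_term = (
        long_term.groupby("patient_id")["drug_name"].nunique().rename("n_long_term_meds")
    )
    any_pim = (
        long_term["drug_name"].str.lower().isin(EXAMPLE_PIM_LIST)
        .groupby(long_term["patient_id"]).any().rename("any_pim")
    )
    return pd.concat([n_long_term, any_pim], axis=1)

# Outcomes would be compared at baseline and 6 months, eg:
# primary_outcomes(baseline_dispenses), primary_outcomes(month6_dispenses)
```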
Study 2 Overview (Gedde et al)
Objective: To examine the effect of a deprescribing intervention (COSMOS) on medication use for nursing home residents.
Design: This was a randomized clinical trial.
Setting and participants: This trial was conducted in 67 units in 33 nursing homes in Norway. Participants were nursing home residents recruited from August 2014 to March 2015. Inclusion criteria included adults aged 65 years and older with at least 2 years of residency in nursing homes. Exclusion criteria included diagnosis of schizophrenia and a life expectancy of 6 months or less. Participants were followed for 4 months; participants were considered lost to follow-up if they died or moved from the nursing home unit. The analyses were per protocol and did not include those lost to follow-up or those who did not undergo a medication review in the intervention group. A total of 217 and 211 residents were included in the intervention and control groups, respectively.
Intervention: The intervention contained 5 components: communication and advance care planning, systematic pain management, medication reviews with collegial mentoring, organization of activities adjusted to needs and preferences, and safety. For medication review, the nursing home physician reviewed medications together with a nurse and study physicians who provided mentoring. The medication review involved a structured process that used assessment tools for behavioral and psychological symptoms of dementia (BPSD), activities of daily living (ADL), pain, cognitive status, well-being and quality of life, and clinical metrics of blood pressure, pulse, and body mass index. The study utilized the START/STOPP criteria1 for medication use in addition to a list of medications with anticholinergic properties for the medication review. In addition, drug interactions were documented through a drug interaction database; the team also incorporated patient wishes and concerns in the medication reviews. The nursing home physician made final decisions on medications. For the control group, nursing home residents received usual care without this intervention.
Main outcome measures: The primary outcome of the study was the mean change in the number of prescribed psychotropic medications, both regularly scheduled and total medications (which also included on-demand drugs) received at 4 months when compared to baseline. Psychotropic medications included antipsychotics, anxiolytics, hypnotics or sedatives, antidepressants, and antidementia drugs. Secondary outcomes included mean changes in BPSD using the Neuropsychiatric Inventory-Nursing home version (NPI-NH) and the Cornell Scale for Depression for Dementia (CSDD) and ADL using the Physical Self Maintenance Scale (PSMS).
Main results: In both the intervention and control groups, 76% of participants were women, and mean age was 86.3 years (SD, 7.95) in the intervention group and 86.6 years (SD, 7.21) in the control group. At baseline, the mean number of total medications was 10.9 (SD, 4.6) in the intervention group and 10.9 (SD, 4.7) in the control group, and the mean number of psychotropic medications was 2.2 (SD, 1.6) and 2.2 (SD, 1.7) in the intervention and control groups, respectively. At 4 months, the mean change from baseline of total psychotropic medications was –0.34 in the intervention group and 0.01 in the control group (P < .001), and the mean change of regularly scheduled psychotropic medications was –0.21 in the intervention group and 0.02 in the control group (P < .001). Measures of BPSD and depression did not differ between intervention and control groups, and ADL showed a small improvement in the intervention group.
Conclusion: This intervention reduced the use of psychotropic medications in nursing home residents without worsening BPSD or depression and may have yielded improvements in ADL.
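As an illustration of the primary outcome described above (mean change from baseline in the number of psychotropic medications per resident at 4 months), the following sketch identifies psychotropic drugs by ATC class prefix. The prefixes and column names are assumptions for illustration and are not the trial's actual data dictionary.

```python
import pandas as pd

# ATC prefixes standing in for the classes named above: antipsychotics (N05A),
# anxiolytics (N05B), hypnotics/sedatives (N05C), antidepressants (N06A),
# and antidementia drugs (N06D).
PSYCHOTROPIC_ATC_PREFIXES = {"N05A", "N05B", "N05C", "N06A", "N06D"}

def mean_change_psychotropics(meds: pd.DataFrame) -> float:
    """Mean within-resident change in psychotropic drug count, month 4 vs baseline.

    Assumes one row per resident-drug-timepoint with columns:
    resident_id, atc_code, timepoint ("baseline" or "month4").
    """
    psych = meds[meds["atc_code"].str[:4].isin(PSYCHOTROPIC_ATC_PREFIXES)]
    counts = (
        psych.groupby(["resident_id", "timepoint"])["atc_code"]
        .nunique()
        .unstack("timepoint", fill_value=0)
    )
    # A per-protocol analysis would first restrict to residents observed at both timepoints.
    return float((counts["month4"] - counts["baseline"]).mean())
```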
Commentary
Polypharmacy is common among older adults, many of whom have multiple chronic conditions and take multiple medications to manage them. Polypharmacy increases the risk of drug interactions and adverse effects; older adults who are frail and/or have cognitive impairment are especially at risk. Reducing medication use, especially of medications likely to cause adverse effects, such as those with anticholinergic properties, has the potential to yield benefits while reducing the burden of taking medications. A large randomized trial, D-PRESCRIBE, found that a pharmacist-led educational intervention can be effective in reducing PIM use in community-dwelling older adults,2 and that targeting patient motivation and capacity to deprescribe could be effective.3 The study by Bayliss and colleagues (Study 1), however, fell short of the effects seen in the earlier D-PRESCRIBE trial. One reason may be that the clinician portion of the intervention was less intensive than that used in the earlier trial; specifically, in the present study, clinicians were not provided with, or expected to use, tools for structured medication review or deprescribing. Although the intervention primed the patient and family for discussions around deprescribing through a brochure and questionnaire, the clinician portion of the intervention was less structured. Another effective intervention that went beyond clinician education used electronic decision support to assist with deprescribing.4
The findings from the Gedde et al study (Study 2) are comparable to those of prior studies in the nursing home population,5 where participants are likely to take a large number of medications, including psychotropic medications, and are more likely to be frail. However, Gedde and colleagues employed a bundled intervention6 that included other components besides medication review, and thus it is unclear whether the effect on ADL can be attributed to the deprescribing of medications alone. Gedde et al’s finding that deprescribing can reduce the use of psychotropic medications while not leading to differences in behavioral and psychologic symptoms or depression is an important contribution to our knowledge about polypharmacy and deprescribing in older patients. Thus, nursing home residents, their families, and clinicians could expect that the deprescribing of psychotropic medications does not lead to worsening symptoms. Of note, the clinician portion of the intervention in the Gedde et al study was quite structured, and this structure may have contributed to the observed effects.
Applications for Clinical Practice and System Implementation
Both studies add to the literature on deprescribing and may offer options for researchers and clinicians who are considering potential components of an effective deprescribing intervention. Patient activation via the methods used in these 2 studies may help prime patients for conversations about deprescribing; however, as the Bayliss et al study shows, a more structured approach to the clinical encounter, such as the use of tools in the electronic health record, may be needed to reduce the use of medications deemed unnecessary or potentially harmful. Further studies should examine not only the effect of deprescribing on medication use but, perhaps more importantly, its impact on patient outcomes in terms of both risks and benefits.
Practice Points
- A more structured approach to clinical encounters (eg, the use of tools in the electronic health record) may be needed when deprescribing unnecessary or potentially harmful medications in older patients in community settings.
- In the nursing home setting, structured deprescribing intervention can reduce the use of psychotropic medications while not leading to differences in behavioral and psychologic symptoms or depression.
–William W. Hung, MD, MPH
1. O’Mahony D, O’Sullivan D, Byrne S, et al. STOPP/START criteria for potentially inappropriate prescribing in older people: version 2. Age Ageing. 2015;44(2):213-218. doi:10.1093/ageing/afu145
2. Martin P, Tamblyn R, Benedetti A, et al. Effect of a pharmacist-led educational intervention on inappropriate medication prescriptions in older adults: the D-PRESCRIBE randomized clinical trial. JAMA. 2018;320(18):1889-1898. doi:10.1001/jama.2018.16131
3. Martin P, Tannenbaum C. A realist evaluation of patients’ decisions to deprescribe in the EMPOWER trial. BMJ Open. 2017;7(4):e015959. doi:10.1136/bmjopen-2017-015959
4. Rieckert A, Reeves D, Altiner A, et al. Use of an electronic decision support tool to reduce polypharmacy in elderly people with chronic diseases: cluster randomised controlled trial. BMJ. 2020;369:m1822. doi:10.1136/bmj.m1822
5. Fournier A, Anrys P, Beuscart JB, et al. Use and deprescribing of potentially inappropriate medications in frail nursing home residents. Drugs Aging. 2020;37(12):917-924. doi:10.1007/s40266-020-00805-7
6. Husebø BS, Ballard C, Aarsland D, et al. The effect of a multicomponent intervention on quality of life in residents of nursing homes: a randomized controlled trial (COSMOS). J Am Med Dir Assoc. 2019;20(3):330-339. doi:10.1016/j.jamda.2018.11.006
Study 1 Overview (Bayliss et al)
Objective: To examine the effect of a deprescribing educational intervention on medication use in older adults with cognitive impairment.
Design: This was a pragmatic, cluster randomized trial conducted in 8 primary care clinics that are part of a nonprofit health care system.
Setting and participants: The primary care clinic populations ranged from 170 to 1125 patients per clinic. The primary care clinics were randomly assigned to intervention or control using a uniform distribution in blocks by clinic size. Eligibility criteria for participants at those practices included age 65 years or older; health plan enrollment at least 1 year prior to intervention; diagnosis of Alzheimer disease and related dementia (ADRD) or mild cognitive impairment (MCI) by International Statistical Classification of Diseases and Related Health Problems, Tenth Revision code or from problem list; 1 or more chronic conditions from those in the Chronic Conditions Warehouse; and 5 or more long-term medications. Those who scheduled a visit at their primary care clinic in advance were eligible for the intervention. Primary care clinicians in intervention clinics were eligible to receive the clinician portion of the intervention. A total of 1433 participants were enrolled in the intervention group, and 1579 participants were enrolled in the control group.
Intervention: The intervention included 2 components: a patient and family component with materials mailed in advance of their primary care visits and a clinician component comprising monthly educational materials on deprescribing and notification in the electronic health record about visits with patient participants. The patient and family component consisted of a brochure titled “Managing Medication” and a questionnaire on attitudes toward deprescribing intended to educate patients and family about deprescribing. Clinicians at intervention clinics received an educational presentation at a monthly clinician meeting as well as tip sheets and a poster on deprescribing topics, and they also were notified of upcoming appointments with patients who received the patient component of the intervention. For the control group, patients and family did not receive any materials, and clinicians did not receive intervention materials or notification of participants enrolled in the trial. Usual care in both intervention and control groups included medication reconciliation and electronic health record alerts for potentially high-risk medications.
Main outcome measures: The primary outcomes of the study were the number of long-term medications per individual and the proportion of patients prescribed 1 or more potentially inappropriate medications. Outcome measurements were extracted from the electronic clinical data, and outcomes were assessed at 6 months, which involved comparing counts of medications at baseline with medications at 6 months. Long-term medications were defined as medications that are prescribed for 28 days or more based on pharmacy dispensing data. Potentially inappropriate medications (PIMs) were defined using the Beers list of medications to avoid in those with cognitive impairment and opioid medications. Analyses were conducted as intention to treat.
Main results: In the intervention group and control group, 56.2% and 54.4% of participants were women, and the mean age was 80.1 years (SD, 7.2) and 79.9 years (SD, 7.5), respectively. At baseline, the mean number of long-term medications was 7.0 (SD, 2.1) in the intervention group and 7.0 (SD, 2.2) in the control group. The proportion of patients taking any PIMs was 30.5% in the intervention group and 29.6% in the control group. At 6 months, the mean number of long-term medications was 6.4 in the intervention group and 6.5 in the control group, with an adjusted difference of –0.1 (95% CI, –0.2 to 0.04; P = .14); the proportion of patients with any PIMs was 17.8% in the intervention group and 20.9% in the control group, with an adjusted difference of –3.2% (95% CI, –6.2 to 0.4; P = .08). Preplanned analyses to examine subgroup differences for those with a higher number of medications (7+ vs 5 or 6 medications) did not find different effects of the intervention.
Conclusion: This educational intervention on deprescribing did not result in reductions in the number of medications or the use of PIMs in patients with cognitive impairment.
Study 2 Overview (Gedde et al)
Objective: To examine the effect of a deprescribing intervention (COSMOS) on medication use for nursing home residents.
Design: This was a randomized clinical trial.
Setting and participants: This trial was conducted in 67 units in 33 nursing homes in Norway. Participants were nursing home residents recruited from August 2014 to March 2015. Inclusion criteria included adults aged 65 years and older with at least 2 years of residency in nursing homes. Exclusion criteria included diagnosis of schizophrenia and a life expectancy of 6 months or less. Participants were followed for 4 months; participants were considered lost to follow-up if they died or moved from the nursing home unit. The analyses were per protocol and did not include those lost to follow-up or those who did not undergo a medication review in the intervention group. A total of 217 and 211 residents were included in the intervention and control groups, respectively.
Intervention: The intervention contained 5 components: communication and advance care planning, systematic pain management, medication reviews with collegial mentoring, organization of activities adjusted to needs and preferences, and safety. For medication review, the nursing home physician reviewed medications together with a nurse and study physicians who provided mentoring. The medication review involved a structured process that used assessment tools for behavioral and psychological symptoms of dementia (BPSD), activities of daily living (ADL), pain, cognitive status, well-being and quality of life, and clinical metrics of blood pressure, pulse, and body mass index. The study utilized the START/STOPP criteria1 for medication use in addition to a list of medications with anticholinergic properties for the medication review. In addition, drug interactions were documented through a drug interaction database; the team also incorporated patient wishes and concerns in the medication reviews. The nursing home physician made final decisions on medications. For the control group, nursing home residents received usual care without this intervention.
Main outcome measures: The primary outcome of the study was the mean change in the number of prescribed psychotropic medications, both regularly scheduled and total medications (which also included on-demand drugs) received at 4 months when compared to baseline. Psychotropic medications included antipsychotics, anxiolytics, hypnotics or sedatives, antidepressants, and antidementia drugs. Secondary outcomes included mean changes in BPSD using the Neuropsychiatric Inventory-Nursing home version (NPI-NH) and the Cornell Scale for Depression for Dementia (CSDD) and ADL using the Physical Self Maintenance Scale (PSMS).
Main results: In both the intervention and control groups, 76% of participants were women, and mean age was 86.3 years (SD, 7.95) in the intervention group and 86.6 years (SD, 7.21) in the control group. At baseline, the mean number of total medications was 10.9 (SD, 4.6) in the intervention group and 10.9 (SD, 4.7) in the control group, and the mean number of psychotropic medications was 2.2 (SD, 1.6) and 2.2 (SD, 1.7) in the intervention and control groups, respectively. At 4 months, the mean change from baseline of total psychotropic medications was –0.34 in the intervention group and 0.01 in the control group (P < .001), and the mean change of regularly scheduled psychotropic medications was –0.21 in the intervention group and 0.02 in the control group (P < .001). Measures of BPSD and depression did not differ between intervention and control groups, and ADL showed a small improvement in the intervention group.
Conclusion: This intervention reduced the use of psychotropic medications in nursing home residents without worsening BPSD or depression and may have yielded improvements in ADL.
Commentary
Polypharmacy is common among older adults, as many of them have multiple chronic conditions and often take multiple medications for managing them. Polypharmacy increases the risk of drug interactions and adverse effects from medications; older adults who are frail and/or who have cognitive impairment are especially at risk. Reducing medication use, especially medications likely to cause adverse effects such as those with anticholinergic properties, has the potential to yield beneficial effects while reducing the burden of taking medications. A large randomized trial found that a pharmacist-led education intervention can be effective in reducing PIM use in community-dwelling older adults,2 and that targeting patient motivation and capacity to deprescribe could be effective.3 This study by Bayliss and colleagues (Study 1), however, fell short of the effects seen in the earlier D-PRESCRIBE trial. One of the reasons for these findings may be that the clinician portion of the intervention was less intensive than that used in the earlier trial; specifically, in the present study, clinicians were not provided with or expected to utilize tools for structured medication review or deprescribing. Although the intervention primes the patient and family for discussions around deprescribing through the use of a brochure and questionnaire, the clinician portion of the intervention was less structured. Another example of an effective intervention that provided a more structured deprescribing intervention beyond education of clinicians utilized electronic decision-support to assist with deprescribing.4
The findings from the Gedde et al study (Study 2) are comparable to those of prior studies in the nursing home population,5 in which participants are likely to take a large number of medications, including psychotropic medications, and are more likely to be frail. However, Gedde and colleagues employed a bundled intervention6 that included components beyond medication review, so it is unclear whether the effect on ADL can be attributed to deprescribing alone. Their finding that deprescribing can reduce the use of psychotropic medications without producing differences in behavioral and psychological symptoms or depression is an important contribution to our knowledge about polypharmacy and deprescribing in older patients; nursing home residents, their families, and clinicians can thus expect that deprescribing psychotropic medications does not lead to worsening of these symptoms. Of note, the clinician portion of the intervention in the Gedde et al study was quite structured, and this structure may have contributed to the observed effects.
Applications for Clinical Practice and System Implementation
Both studies add to the literature on deprescribing and may offer options for researchers and clinicians who are weighing potential components of an effective deprescribing intervention. Patient activation, via the methods used in these 2 studies, may help to prime patients for conversations about deprescribing; however, as the Bayliss et al study suggests, a more structured approach to the clinical encounter, such as the use of tools in the electronic health record, may be needed to actually reduce the use of medications deemed unnecessary or potentially harmful. Further studies should examine the effect of deprescribing not only on medication use but, perhaps more importantly, on patient outcomes, in terms of both risks and benefits.
Practice Points
- A more structured approach to clinical encounters (eg, the use of tools in the electronic health record) may be needed when deprescribing unnecessary or potentially harmful medications in older patients in community settings.
- In the nursing home setting, a structured deprescribing intervention can reduce the use of psychotropic medications without leading to differences in behavioral and psychological symptoms or depression.
–William W. Hung, MD, MPH
1. O’Mahony D, O’Sullivan D, Byrne S, et al. STOPP/START criteria for potentially inappropriate prescribing in older people: version 2. Age Ageing. 2015;44(2):213-218. doi:10.1093/ageing/afu145
2. Martin P, Tamblyn R, Benedetti A, et al. Effect of a pharmacist-led educational intervention on inappropriate medication prescriptions in older adults: the D-PRESCRIBE randomized clinical trial. JAMA. 2018;320(18):1889-1898. doi:10.1001/jama.2018.16131
3. Martin P, Tannenbaum C. A realist evaluation of patients’ decisions to deprescribe in the EMPOWER trial. BMJ Open. 2017;7(4):e015959. doi:10.1136/bmjopen-2017-015959
4. Rieckert A, Reeves D, Altiner A, et al. Use of an electronic decision support tool to reduce polypharmacy in elderly people with chronic diseases: cluster randomised controlled trial. BMJ. 2020;369:m1822. doi:10.1136/bmj.m1822
5. Fournier A, Anrys P, Beuscart JB, et al. Use and deprescribing of potentially inappropriate medications in frail nursing home residents. Drugs Aging. 2020;37(12):917-924. doi:10.1007/s40266-020-00805-7
6. Husebø BS, Ballard C, Aarsland D, et al. The effect of a multicomponent intervention on quality of life in residents of nursing homes: a randomized controlled trial (COSMOS). J Am Med Dir Assoc. 2019;20(3):330-339. doi:10.1016/j.jamda.2018.11.006
Abbreviated Delirium Screening Instruments: Plausible Tool to Improve Delirium Detection in Hospitalized Older Patients
Study 1 Overview (Oberhaus et al)
Objective: To compare the 3-Minute Diagnostic Confusion Assessment Method (3D-CAM) to the long-form Confusion Assessment Method (CAM) in detecting postoperative delirium.
Design: Prospective concurrent comparison of 3D-CAM and CAM evaluations in a cohort of postoperative geriatric patients.
Setting and participants: Eligible participants were patients aged 60 years or older undergoing major elective surgery at Barnes Jewish Hospital (St. Louis, Missouri) who were enrolled in ongoing clinical trials (PODCAST, ENGAGES, SATISFY-SOS) between 2015 and 2018. Surgeries were at least 2 hours in length and required general anesthesia, planned extubation, and a minimum 2-day hospital stay. Investigators were extensively trained in administering the 3D-CAM and CAM instruments. Participants were evaluated 2 hours after the end of anesthesia care on the day of surgery, then daily until follow-up was completed per clinical trial protocol or until the participant was determined by CAM to be nondelirious for 3 consecutive days. For each evaluation, the 3D-CAM and CAM assessors approached the participant together, but the evaluation was conducted such that the 3D-CAM assessor was masked to the additional questions ascertained by the long-form CAM assessment. The 3D-CAM and CAM assessors then independently scored their respective assessments, each blinded to the other's results.
Main outcome measures: Participants were concurrently evaluated for postoperative delirium by both 3D-CAM and long-form CAM assessments. Comparisons between 3D-CAM and CAM scores were made using Cohen κ with repeated measures, generalized linear mixed-effects model, and Bland-Altman analysis.
Main results: Sixteen raters performed 471 concurrent 3D-CAM and CAM assessments in 299 participants (mean [SD] age, 69 [6.5] years). Of these participants, 152 (50.8%) were men, 263 (88.0%) were White, and 211 (70.6%) underwent noncardiac surgery. Both instruments showed good intraclass correlation (0.98 for 3D-CAM, 0.84 for CAM) with good overall agreement (Cohen κ = 0.71; 95% CI, 0.58-0.83). The mixed-effects model indicated a significant disagreement between the 3D-CAM and CAM assessments (estimated difference in fixed effect, –0.68; 95% CI, –1.32 to –0.05; P = .04). The Bland-Altman analysis showed that the probability of a delirium diagnosis with the 3D-CAM was more than twice that with the CAM (probability ratio, 2.78; 95% CI, 2.44-3.23).
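To make the agreement statistic concrete, the sketch below computes a plain (non-repeated-measures) Cohen κ from a 2 × 2 table of paired delirium calls. The function name and the counts are hypothetical and chosen only for illustration; they are not the study's data, and the published analysis additionally accounted for repeated measures within participants.

def cohens_kappa(both_pos, cam_only, threed_only, both_neg):
    """Cohen's kappa from a 2x2 agreement table of paired ratings."""
    n = both_pos + cam_only + threed_only + both_neg
    observed = (both_pos + both_neg) / n                # proportion of assessments where the raters agree
    p_cam = (both_pos + cam_only) / n                   # CAM marginal probability of a positive call
    p_3d = (both_pos + threed_only) / n                 # 3D-CAM marginal probability of a positive call
    expected = p_cam * p_3d + (1 - p_cam) * (1 - p_3d)  # agreement expected by chance
    return (observed - expected) / (1 - expected)

# Hypothetical counts out of 471 paired assessments (illustration only)
print(round(cohens_kappa(both_pos=50, cam_only=5, threed_only=20, both_neg=396), 2))  # ~0.77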
Conclusion: The high degree of agreement between 3D-CAM and long-form CAM assessments suggests that the former may be a pragmatic and easy-to-administer clinical tool to screen for postoperative delirium in vulnerable older surgical patients.
Study 2 Overview (Shenkin et al)
Objective: To assess the accuracy of the 4 ‘A’s Test (4AT) for delirium detection in the medical inpatient setting and to compare the 4AT to the CAM.
Design: Prospective randomized diagnostic test accuracy study.
Setting and participants: This study was conducted in emergency departments and acute medical wards at 3 UK sites (Edinburgh, Bradford, and Sheffield) and enrolled acute medical patients aged 70 years or older without acute life-threatening illness and/or coma. Assessors administering the delirium evaluation were nurses or graduate clinical research associates who underwent systematic training in delirium and delirium assessment. Additional training was provided to those administering the CAM but not to those administering the 4AT, as the latter is designed to be administered without special training. First, all participants underwent a reference standard delirium assessment using Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) criteria to derive a final definitive diagnosis of delirium via expert consensus (1 psychiatrist and 2 geriatricians). Then, the participants were randomized to either the 4AT or the comparator CAM group using computer-generated pseudo-random numbers, stratified by study site, with block allocation. All assessments were performed by pairs of independent assessors blinded to the results of the other assessment.
Main outcome measures: All participants were evaluated by the reference standard (DSM-IV criteria for delirium) and by either the 4AT or the CAM. The accuracy of the 4AT was evaluated by comparing its positive and negative predictive values, sensitivity, and specificity against the reference standard and was analyzed via the area under the receiver operating characteristic curve. The diagnostic accuracy of the 4AT relative to the CAM was evaluated by comparing positive and negative predictive values, sensitivity, and specificity using Fisher's exact test. The overall performance of the 4AT and CAM was summarized using Youden's index and the diagnostic odds ratio (the odds of a positive test in patients with delirium divided by the odds of a positive test in those without).
Results: All 843 individuals enrolled in the study were randomized and 785 were included in the analysis (23 withdrew, 3 lost contact, 32 indeterminate diagnosis, 2 missing outcome). Of the participants analyzed, the mean [SD] age was 81.4 [6.4] years, and 12.1% (95/785) had delirium by the reference standard assessment, 14.3% (56/392) by the 4AT, and 4.7% (18/384) by the CAM. The 4AT group had an area under the receiver operating characteristic curve of 0.90 (95% CI, 0.84-0.96), a sensitivity of 76% (95% CI, 61%-87%), and a specificity of 94% (95% CI, 92%-97%). In comparison, the CAM group had a sensitivity of 40% (95% CI, 26%-57%) and a specificity of 100% (95% CI, 98%-100%).
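As a quick check on the summary measures named above, the sketch below computes Youden's index and the diagnostic odds ratio from the reported point estimates for sensitivity and specificity. The helper function names are illustrative, and confidence intervals and the underlying 2 × 2 counts are ignored, so the outputs are approximate, illustrative values only.

def youden_index(sensitivity, specificity):
    """Youden's J statistic: sensitivity + specificity - 1."""
    return sensitivity + specificity - 1

def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR = (sens / (1 - sens)) / ((1 - spec) / spec); undefined (infinite) at 100% sensitivity or specificity."""
    if sensitivity == 1.0 or specificity == 1.0:
        return float("inf")
    return (sensitivity / (1 - sensitivity)) / ((1 - specificity) / specificity)

print(round(youden_index(0.76, 0.94), 2))           # 4AT: 0.7
print(round(diagnostic_odds_ratio(0.76, 0.94), 1))  # 4AT: ~49.6
print(round(youden_index(0.40, 1.00), 2))           # CAM: 0.4
print(diagnostic_odds_ratio(0.40, 1.00))            # CAM: inf (specificity of 100%)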
Conclusions: The 4AT is a pragmatic screening test for delirium in medical settings and does not require special training to administer. Its use may help to improve delirium detection as part of routine clinical care in hospitalized older adults.
Commentary
Delirium is an acute confusional state marked by fluctuating mental status, inattention, disorganized thinking, and altered level of consciousness. It is exceedingly common in older patients in both surgical and medical settings and is associated with increased morbidity, mortality, hospital length of stay, institutionalization, and health care costs. Delirium is frequently underdiagnosed in the hospitalized setting, perhaps due to a combination of its waxing and waning nature and a lack of pragmatic and easily implementable screening tools that can be readily administered by clinicians and nonclinicians alike.1 While the CAM is a well-validated instrument to diagnose delirium, it requires specific training in the rating of each of the cardinal features ascertained through a brief cognitive assessment and takes 5 to 10 minutes to complete. Taken together, given the high patient load for clinicians in the hospital setting, the validation and application of brief delirium screening instruments that can be reliably administered by nonphysicians and nonclinicians may enhance delirium detection in vulnerable patients and consequently improve their outcomes.
In Study 1, Oberhaus et al approach the challenge of underdiagnosing delirium in the postoperative setting by investigating whether the widely accepted long-form CAM and an abbreviated 3-minute version, the 3D-CAM, provide similar delirium detection in older surgical patients. The authors found that both instruments were reliable tests individually (high interrater reliability) and had good overall agreement. However, the 3D-CAM was more likely to yield a positive diagnosis of delirium compared to the long-form CAM, consistent with its purpose as a screening tool with a high sensitivity. It is important to emphasize that the 3D-CAM takes less time to administer, but also requires less extensive training and clinical knowledge than the long-form CAM. Therefore, this instrument meets the prerequisite of a brief screening test that can be rapidly administered by nonclinicians, and if affirmative, followed by a more extensive confirmatory test performed by a clinician. Limitations of this study include a lack of a reference standard structured interview conducted by a physician-rater to better determine the true diagnostic accuracy of both 3D-CAM and CAM assessments, and the use of convenience sampling at a single center, which reduces the generalizability of its findings.
In a similar vein, Shenkin et al in Study 2 attempt to evaluate the utility of the 4AT instrument in diagnosing delirium in older medical inpatients by testing the diagnostic accuracy of the 4AT against a reference standard (ie, DSM-IV–based evaluation by physicians) as well as comparing it to CAM. The 4AT takes less time (~2 minutes) and requires less knowledge and training to administer as compared to the CAM. The study showed that the abbreviated 4AT, compared to CAM, had a higher sensitivity (76% vs 40%) and lower specificity (94% vs 100%) in delirium detection. Thus, akin to the application of 3D-CAM in the postoperative setting, 4AT possesses key characteristics of a brief delirium screening test for older patients in the acute medical setting. In contrast to the Oberhaus et al study, a major strength of this study was the utilization of a reference standard that was validated by expert consensus. This allowed the 4AT and CAM assessments to be compared to a more objective standard, thereby directly testing their diagnostic performance in detecting delirium.
Application for Clinical Practice and System Implementation
The findings from both Study 1 and 2 suggest that using an abbreviated delirium instrument in both surgical and acute medical settings may provide a pragmatic and sensitive method to detect delirium in older patients. The brevity of administration of 3D-CAM (~3 minutes) and 4AT (~2 minutes), combined with their higher sensitivity for detecting delirium compared to CAM, make these instruments potentially effective rapid screening tests for delirium in hospitalized older patients. Importantly, the utilization of such instruments might be a feasible way to mitigate the issue of underdiagnosing delirium in the hospital.
Several additional aspects of these abbreviated delirium instruments increase their suitability for clinical application. Specifically, the 3D-CAM and 4AT require less extensive training and clinical knowledge to both administer and interpret the results than the CAM.2 For instance, a multistage, multiday training for CAM is a key factor in maintaining its diagnostic accuracy.3,4 In contrast, the 3D-CAM requires only a 1- to 2-hour training session, and the 4AT can be administered by a nonclinician without the need for instrument-specific training. Thus, implementation of these instruments can be particularly pragmatic in clinical settings in which the staff involved in delirium screening cannot undergo the substantial training required to administer CAM. Moreover, these abbreviated tests enable nonphysician care team members to assume the role of delirium screener in the hospital. Taken together, the adoption of these abbreviated instruments may facilitate brief screenings of delirium in older patients by caregivers who see them most often—nurses and certified nursing assistants—thereby improving early detection and prevention of delirium-related complications in the hospital.
The feasibility of using abbreviated delirium screening instruments in the hospital setting raises a system implementation question—if these instruments are designed to be administered by those with limited to no training, could nonclinicians, such as hospital volunteers, effectively take on delirium screening roles in the hospital? If volunteers are able to take on this role, the integration of hospital volunteers into the clinical team can greatly expand the capacity for delirium screening in the hospital setting. Further research is warranted to validate the diagnostic accuracy of 3D-CAM and 4AT by nonclinician administrators in order to more broadly adopt this approach to delirium screening.
Practice Points
- Abbreviated delirium screening tools such as 3D-CAM and 4AT may be pragmatic instruments to improve delirium detection in surgical and hospitalized older patients, respectively.
- Further studies are warranted to validate the diagnostic accuracy of 3D-CAM and 4AT by nonclinician administrators in order to more broadly adopt this approach to delirium screening.
Jared Doan, BS, and Fred Ko, MD
Geriatrics and Palliative Medicine, Icahn School of Medicine at Mount Sinai
1. Fong TG, Tulebaev SR, Inouye SK. Delirium in elderly adults: diagnosis, prevention and treatment. Nat Rev Neurol. 2009;5(4):210-220. doi:10.1038/nrneurol.2009.24
2. Marcantonio ER, Ngo LH, O’Connor M, et al. 3D-CAM: derivation and validation of a 3-minute diagnostic interview for CAM-defined delirium: a cross-sectional diagnostic test study. Ann Intern Med. 2014;161(8):554-561. doi:10.7326/M14-0865
3. Green JR, Smith J, Teale E, et al. Use of the confusion assessment method in multicentre delirium trials: training and standardisation. BMC Geriatr. 2019;19(1):107. doi:10.1186/s12877-019-1129-8
4. Wei LA, Fearing MA, Sternberg EJ, Inouye SK. The Confusion Assessment Method: a systematic review of current usage. J Am Geriatr Soc. 2008;56(5):823-830. doi:10.1111/j.1532-5415.2008.01674.x
Barriers to System Quality Improvement in Health Care
Corresponding author: Ebrahim Barkoudah, MD, MPH, Department of Medicine, Brigham and Women’s Hospital, Boston, MA; ebarkoudah@bwh.harvard.edu
Process improvement in any industry aims to increase both the efficiency of resource utilization and delivery methods (cost) and the quality of the product (outcomes), with the ultimate goal of continuous development.1 In health care, the general targets of quality improvement (QI) efforts, pursued through various implementation methodologies, are variation in processes and outcomes and inefficient use of resources, both of which erode value (defined as outcomes achieved relative to costs).2 When the ultimate aim is to serve the patient (the customer), best clinical practice requires both maintaining high quality (individual care delivery) and controlling costs (efficient care system delivery), which together yield optimal, value-based care. High-quality individual care and efficient care delivery are not competing concepts, but when working to improve both health care outcomes and costs, traditional and nontraditional barriers to system QI often arise.3
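To make the value relationship concrete, consider a schematic illustration (the 10% figure is hypothetical and not drawn from any cited source): if outcomes are held constant while the cost of delivering them falls by 10%, value rises by roughly 11%:
\[
\text{Value} = \frac{\text{Outcomes}}{\text{Costs}}, \qquad \frac{\text{Outcomes}}{0.9\,\text{Costs}} \approx 1.11 \times \frac{\text{Outcomes}}{\text{Costs}}
\]
The same framing shows why quality and cost are not competing concepts: an intervention that improves outcomes without raising costs, or lowers costs without degrading outcomes, increases value in either case.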
The possible scenarios after a QI intervention include backsliding (regression to the mean over time), a steady state (a minimal but sustained fixed improvement), and continuous improvement (tangible gains that continue to accrue after the intervention is completed, a legacy effect).4 The scalability of results should be considered during the process-measurement and intervention-design phases of every QI project; however, the complex barriers present in the health care environment at each level of implementation must also be accounted for to prevent failure during the scaling phase.5
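A short Python sketch can make these three trajectories concrete. The metric, baseline, and rates below are hypothetical illustrations only and are not drawn from the article or its references; the sketch simply prints how a process measure might evolve in the year after an intervention under each scenario.

# Illustrative sketch only: all numbers below are hypothetical.
baseline = 60.0           # hypothetical pre-intervention performance of a process metric (%)
post_intervention = 75.0  # hypothetical performance at the close of the QI intervention (%)

def backsliding(month):
    # performance regresses toward the pre-intervention mean over time
    return baseline + (post_intervention - baseline) * (0.8 ** month)

def steady_state(month):
    # a minimal fixed improvement that is sustained without further gains
    return post_intervention

def continuous_improvement(month):
    # gains continue to accrue after the intervention ends (legacy effect), capped at 95%
    return min(post_intervention + 1.5 * month, 95.0)

for month in range(0, 13, 3):
    print(f"month {month:2d}: "
          f"backsliding = {backsliding(month):5.1f}%, "
          f"steady state = {steady_state(month):5.1f}%, "
          f"continuous improvement = {continuous_improvement(month):5.1f}%")

Run over 12 months, the backsliding curve drifts back toward the 60% baseline, the steady-state curve holds at 75%, and the continuous-improvement curve climbs toward its ceiling, which is the distinction the design and scaling phases must anticipate.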
The barriers to QI outcomes that lead to continuous improvement are multifactorial and relate to both intrinsic and extrinsic factors.6 These factors operate at 3 fundamental levels: (1) individual-level inertia, beliefs, prior personal knowledge, and team-related factors7,8; (2) intervention-related and process-specific barriers and clinical practice obstacles; and (3) organizational-level challenges and macro- and population-level barriers (Figure). The obstacles faced during implementation will likely span at least 2 of these levels simultaneously, adding complexity that can hinder or prevent a tangible, successful QI process and ultimately lead to backsliding or a minimal fixed improvement rather than continuous improvement. Furthermore, a patient-centered approach to QI adds further complexity to design and execution, because reaching sustainable, meaningful improvement requires incorporating patients' preferences, caregiver engagement, and shared decision-making.9
Overcoming these multidomain barriers and achieving resilience and sustainability require thoughtful planning and execution through a multifaceted approach.10 A meaningful start is to address clinical inertia at the individual and team levels by promoting open innovation and inviting collaborations and ideas from outside the institution through networks.11 At the individual level, encouraging and motivating health care workers to participate in QI fosters a multidisciplinary approach and collaborative harmony. Concurrently, the organization should support QI capability and scalability by removing competing priorities and establishing effective leadership that ensures resource allocation, communicates clear value-based principles, and fosters an environment of psychological safety.
A state of continuous improvement is the optimal QI target, one that can be attained by removing obstacles and paving a clear pathway to implementation. Addressing all 3 levels of barriers will position the organization for meaningful and successful QI and, ultimately, continuous improvement.
1. Adesola S, Baines T. Developing and evaluating a methodology for business process improvement. Business Process Manage J. 2005;11(1):37-46. doi:10.1108/14637150510578719
2. Gershon M. Choosing which process improvement methodology to implement. J Appl Business & Economics. 2010;10(5):61-69.
3. Porter ME, Teisberg EO. Redefining Health Care: Creating Value-Based Competition on Results. Harvard Business Press; 2006.
4. Holweg M, Davies J, De Meyer A, Lawson B, Schmenner RW. Process Theory: The Principles of Operations Management. Oxford University Press; 2018.
5. Shortell SM, Bennett CL, Byck GR. Assessing the impact of continuous quality improvement on clinical practice: what it will take to accelerate progress. Milbank Q. 1998;76(4):593-624. doi:10.1111/1468-0009.00107
6. Solomons NM, Spross JA. Evidence‐based practice barriers and facilitators from a continuous quality improvement perspective: an integrative review. J Nurs Manage. 2011;19(1):109-120. doi:10.1111/j.1365-2834.2010.01144.x
7. Phillips LS, Branch WT, Cook CB, et al. Clinical inertia. Ann Intern Med. 2001;135(9):825-834. doi:10.7326/0003-4819-135-9-200111060-00012
8. Stevenson K, Baker R, Farooqi A, Sorrie R, Khunti K. Features of primary health care teams associated with successful quality improvement of diabetes care: a qualitative study. Fam Pract. 2001;18(1):21-26. doi:10.1093/fampra/18.1.21
9. What is patient-centered care? NEJM Catalyst. January 1, 2017. Accessed August 31, 2022. https://catalyst.nejm.org/doi/full/10.1056/CAT.17.0559
10. Kilbourne AM, Beck K, Spaeth-Rublee B, et al. Measuring and improving the quality of mental health care: a global perspective. World Psychiatry. 2018;17(1):30-38. doi:10.1002/wps.20482
11. Huang HC, Lai MC, Lin LH, Chen CT. Overcoming organizational inertia to strengthen business model innovation: An open innovation perspective. J Organizational Change Manage. 2013;26(6):977-1002. doi:10.1108/JOCM-04-2012-0047