Evaluation of Pharmacologic Interventions for Weight Management in a Veteran Population
The American Heart Association, the American College of Cardiology, and the Obesity Society define overweight as a body mass index (BMI) of 25 to 29.9 and obesity as a BMI ≥ 30; morbid obesity is defined as a BMI ≥ 40, or a BMI ≥ 35 with weight-related comorbidities.2,3 Based on these BMI cutoffs, the Endocrine Society recommends diet and lifestyle as the foundation of weight management and pharmacotherapy for those with a BMI ≥ 30, even without comorbidities. In patients with a BMI ≥ 27, weight management medications may be considered if a patient has comorbid hypertension, type 2 diabetes mellitus (T2DM), dyslipidemia, metabolic syndrome, obstructive sleep apnea, or nonalcoholic fatty liver disease. Patients with a BMI > 40 are eligible for weight loss surgery.4
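As a minimal illustration of the BMI cutoffs above (the function name and category labels are ours, not from the guidelines, and the morbid obesity branch uses the BMI ≥ 40 threshold alone, since the comorbidity-based variant would require additional input):

```python
def bmi_category(bmi: float) -> str:
    """Classify BMI using the AHA/ACC/TOS cutoffs cited in the text."""
    if bmi < 25:
        return "not overweight"
    if bmi < 30:
        return "overweight"       # BMI 25 to 29.9
    if bmi < 40:
        return "obese"            # BMI >= 30
    return "morbidly obese"       # BMI >= 40
```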
Lifestyle and dietary interventions are the foundation of current weight management guidelines from the Endocrine Society.4 At a minimum, guidelines recommend enrolling motivated patients in a high-intensity lifestyle intervention class of at least 14 sessions in the first 6 months to reach a goal weight loss of 5 to 10% from baseline and to maintain a reduction of 3 to 5% from baseline.3 Medications are recommended as an adjunct to lifestyle and dietary changes. Most weight management medications work in the brain to stimulate satiety signaling, which helps motivated patients adhere to their dietary interventions, assists those who have been unsuccessful in earlier weight loss attempts, and helps maintain weight loss.3,4
Guidelines recommend 7 weight management medications, including orlistat (both prescription strength and over-the-counter), liraglutide, phentermine, phentermine/topiramate, lorcaserin, and naltrexone/bupropion. Using medications to assist with weight loss increases the likelihood that patients will achieve 5 to 10% weight loss from baseline.5,6 Studies of the long-term effects of these medications on weight loss have found improvements in blood pressure (BP), biomarkers for cardiovascular disease, and T2DM-related comorbidities.3,5,7
Positive effects on comorbidities have been found to be related to drug class and mechanism of action (MOA); those that also are approved for T2DM have demonstrated the most favorable cardiovascular effects.7 Other medications that work as stimulants or as modulators of serotonin pathways are associated with increased risks, prompting the US Food and Drug Administration (FDA) to remove some medications from the market.7,8 In January 2020, lorcaserin was taken off the market because of increased risk of cancer found in postmarketing surveillance.9 The benefit of weight loss must be weighed against the risk of medication use.
Monthly follow-up is recommended when initiating weight management medications to assess safety and efficacy; medications should be discontinued if weight loss is inadequate in the first 3 months.1,3,4 Limited studies have assessed the long-term use of weight management medications in a real-world setting. Medications are prescribed for weight management at Veteran Health Indiana (VHI) in outpatient clinics, including primary care, endocrinology, and gastrointestinal (GI) specialties. However, prescribing practices, outcomes, and adherence to guideline recommendations have not been studied. Data from this study will be used to better understand how VHI can serve its veterans through diet, lifestyle, and pharmacologic interventions.
Methods
We conducted a single-center, retrospective chart review for patients started on weight management medications at VHI. A patient list was generated based on prescription fills from June 1, 2017 to June 30, 2019. All data were obtained using the Computerized Patient Record System and patients were not contacted. This study was approved by the Indiana University Health Institutional Review Board and the VHI Research and Development Committee.
At the time of the study, 5 weight management medications were available at VHI: orlistat, liraglutide, phentermine/topiramate, naltrexone/bupropion, and lorcaserin.
Patients were included in the study if they received a prescription for any 1 of the 5 available medications during the enrollment period. Patients were excluded if they received a prescription from or were treated by a civilian health care provider, if they never used the medication, or if their weight loss was attributed to a cancer diagnosis. These criteria yielded 86 patients, from whom 96 unique weight loss prescriptions were generated. Data were collected for each instance of medication use, so some patients were included multiple times. When a medication failed, data collection for that medication ended when the failure was documented, and new data points began when a new medication was prescribed; all data were collected per medication, not per patient. This method was used to account for medication failure and to provide accurate weight loss results based on medication choice within this institution.
The primary outcomes included total weight loss and weight loss as a percentage of baseline weight at 3, 6, 12, and > 12 months of therapy. Secondary outcomes included weight loss of 5% from baseline, rate of successful weight maintenance after initial weight loss of 5% from baseline, adverse drug reaction (ADR) monitoring, and use of weight management medications across clinics at VHI.
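The primary and secondary weight outcomes above reduce to simple arithmetic on baseline and follow-up weights. A sketch of those calculations (function names and the example weights are ours, for illustration only):

```python
def percent_weight_loss(baseline_kg: float, current_kg: float) -> float:
    """Weight loss expressed as a percentage of baseline weight."""
    return (baseline_kg - current_kg) / baseline_kg * 100.0


def met_5pct_goal(baseline_kg: float, current_kg: float) -> bool:
    """Secondary outcome threshold: at least 5% loss from baseline."""
    return percent_weight_loss(baseline_kg, current_kg) >= 5.0
```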
Demographic data included race, age, sex, baseline weight, BMI, and comorbid medical conditions. Comorbidities were collected based on the most recent primary care clinical note before initiating medication. Medication data collected included medications used to manage comorbidities. Data related to weight management medication included prescribing clinic, reason for medication discontinuation, or bariatric surgery intervention if applicable.
Efficacy outcome data included weight and BMI across therapy duration. Safety outcomes data included heart rate, BP, and ADRs that resulted in medication discontinuation as documented in the electronic health record (EHR).
We used descriptive statistics, including mean, standard deviation (SD), range, and percentage. For continuous data, Kruskal-Wallis tests were used because of nonparametric data distribution among the different medications, with a prespecified α = 0.05. With the observed sample sizes and SDs, post hoc power calculations showed that the study had 80% power at a 5% significance level to detect weight changes of 8.6 kg, 7.3 kg, and 12.4 kg at 3, 6, and 12 months, respectively, using nonparametric tests.
Results
A total of 86 patients were identified based on prescription fills, which produced 99 unique instances of medication use. Of the 99 instances identified, 3 met exclusion criteria and were not included in the final analysis. Among included veterans, 16 were female and 80 were male (Table 1). Most of those included were White (86%) and male (83%); mean age was 53 years. At baseline, mean weight was 130 kg and mean BMI was 41.
Comorbidities and Medication Use
Hypertension (66%), hyperlipidemia (64%), and psychiatric diagnoses (50%) were the most common comorbid conditions. Substance use (23%) and T2DM (40%) were the most common comorbidities influencing medication choice. For this analysis, substance use evaluation included amphetamines and cocaine.
Based on guidelines for weight management medications within the VHI system, phentermine/topiramate is the preferred first-line agent unless patients have contraindications, in which case naltrexone/bupropion is recommended. However, for patients with comorbid T2DM, liraglutide is preferred because of its beneficial effects for both weight loss and blood glucose control.2 Most patients at VHI were started on liraglutide (44%) or phentermine/topiramate (42%), which was in line with recommendations. Our sample included ≥ 1 prescription for each medication available at our facility, although the number of patients on each medication was not equal. Of note, the one patient taking lorcaserin at the time of the study discontinued therapy in response to recent FDA guidance.9
Medications for comorbid conditions could contribute to weight gain. In the patient sample, β-blockers (n = 24) and anticonvulsants, including gabapentin and pregabalin (n = 22), were the most common. Other medications that could have contributed to weight gain included sulfonylureas (n = 5), antipsychotics (n = 4), tricyclic antidepressants (n = 2), and hormone replacement therapies (n = 2).
Primary Outcomes
The mean weight of participants dropped from 129.9 to 114.2 kg over the 12 months of weight management medication therapy, for an absolute difference of 15.8 kg (Figure 1 and eTable 1 available at doi:10.12788/fp.0117). Weight loss was recorded at 3, 6, 12, and > 12 months of weight management therapy. At each time point, weight loss was statistically significant (P < .001) compared with baseline (Table 2), even though not every patient had weight loss records at each time point.
When classified by medication choice,
Secondary Outcomes
More than one-half of the patients analyzed lost 5 to 10% from baseline while taking weight management medication.
Among patients who lost at least 5% from baseline, we performed further analysis to assess weight maintenance of 3 to 5% from baseline for 12 months.
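The maintenance analysis above checks, for each responder, whether the 3 to 5% reduction from baseline persisted across follow-up visits. The exact operationalization is not stated in the text; one plausible reading is that every follow-up weight must still reflect at least the minimum percentage loss (function name and threshold default are ours):

```python
def maintained_loss(baseline_kg, followup_kg, min_pct=3.0):
    """True if every follow-up weight still reflects at least
    min_pct % loss from baseline (one reading of the 3 to 5%
    maintenance criterion; hypothetical, not the study's code)."""
    return all(
        (baseline_kg - w) / baseline_kg * 100.0 >= min_pct
        for w in followup_kg
    )
```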
We found that most of our prescriptions (n = 50) were entered by the endocrinology department in conjunction with the MOVE! program (eTable 3 available at doi:10.12788/fp.0117). All 4 of our primary care clinics prescribed weight loss medication; however, 1 clinic prescribed the most. Other prescriptions came from community-based outpatient clinics or other specialties, including gastroenterology, orthopedics, and sleep medicine.
Nineteen (18%) patients experienced an adverse event (AE) that led to medication discontinuation, which was recorded in their chart (eTable 4 available at doi:10.12788/fp.0117). The most common AEs were GI upset with liraglutide or orlistat and dull aching and pain with phentermine/topiramate. Two severe AEs occurred: 1 patient experienced a change in mental health status and a suicide attempt with naltrexone/bupropion, and 1 patient discontinued phentermine/topiramate because of a change in neurologic status.
Medications were stopped primarily because of inadequate weight loss (n = 13), and most of these patients tried additional medications. However, 1 medication failure resulted in sleeve gastrectomy. Other reasons for medication discontinuation included missed MOVE! appointments, loss to follow-up, and patient-elected discontinuation.
Discussion
This study evaluated the use and outcomes of weight management medication among veterans at VHI. The study aimed to better understand the efficacy and safety of these medications while exposing potential weaknesses in care and to promote avenues to improve weight loss and maintenance.
Clinical trials for weight management medications reported weight loss of 8 to 10 kg over 56 weeks, with 21 to 63% of patients losing at least 5% of baseline weight.10-14 Our study found a higher average weight loss (−15.8 kg) than that reported in trials and a consistent percentage of patients (58.3%) who achieved at least 5% weight loss. It is promising that, when used in a noncontrolled setting, these medications produced weight loss consistent with results seen in large, controlled trials.
Pi-Sunyer and colleagues found continued weight loss after the initial 5% weight loss, to an eventual 10% weight loss in many patients.10 Additionally, Smith and colleagues found that nearly 68% of their participants who took lorcaserin were able to maintain 3 to 5% weight loss over 12 months.13 Sjöström and colleagues acknowledged that many patients taking orlistat for an extended period began to regain weight, although at one-half the rate seen in the placebo group.12 Our study found that fewer patients were able to maintain their weight loss over 12 months, with only 30% of patients maintaining 3 to 5% weight loss from baseline. This difference in weight maintenance was likely because of the uncontrolled nature of this study. Once patients reach their initial weight loss goal, even the most motivated patients will have trouble maintaining that weight loss.4 Despite the challenges associated with maintaining weight loss, the quality of life benefits patients gained and the potential reductions in health care spending support using resources to improve these outcomes.2,14,15
Pi-Sunyer and colleagues reported high incidences of nausea (40%), vomiting (16%), diarrhea (21%), and constipation (20%) with liraglutide.10 Sjöström and colleagues reported 7% of patients experienced GI upset with orlistat.12 Comparatively, only 17% of our patients reported AEs that required discontinuation, including GI upset. One patient in our study discontinued naltrexone/bupropion because of a significant change in mental status and suicide attempt. Clinical trials did not report a greater risk of depression or suicidality compared with placebo; however, there is a warning on the labeling of naltrexone/bupropion for increased suicidality with the use of antidepressant agents.16,17 The neurologic AE that required discontinuation of phentermine/topiramate at our institution is unique based on published information.11,18
The data from this study reinforced the observation that weight maintenance is the most challenging aspect of weight loss. Although our data showed clinically meaningful weight loss from baseline, many patients regained their weight, and some exceeded their baseline weight. Beyond providing these medications, this evidence suggests the need for close, continued follow-up through patients’ weight loss journey.
Limitations
First, because this was a retrospective chart review, data collection was influenced by and limited to information recorded in the EHR. AEs that resulted in medication discontinuation were assessed from the patient’s chart, which might be inaccurate if providers did not update the records. Follow-up was not always scheduled at regular intervals after medication initiation, resulting in varying sample sizes at each time point and potentially interfering with true weight loss averages. Although not included in this analysis, it might be beneficial to evaluate adherence to recommendations for laboratory and weight monitoring to better identify where future monitoring can be improved. Second, the number of patients taking each medication was unbalanced. Specifically, we saw a change in weight with orlistat that exceeded what is consistently seen in larger, more controlled trials. Although this reflects real-world use, small sample sizes cannot be generalized to the larger population and might reflect outliers. Last, generalizability is limited by the veteran population demographic, which is predominantly male and lacks ethnic diversity. This study also was carried out at a single academic tertiary medical center, and the results might not apply to all populations.
Conclusions
Despite the limitations discussed, this study shows that the use of weight management medications in a general veteran population produces initial weight loss consistent with previous studies. However, there is room for continued improvement in follow-up strategies to promote greater weight maintenance after initial weight loss. Considering the high health care costs, personal burden, and potential long-term complications associated with obesity, efforts to promote development of programs that support weight management and maintenance are imperative.
Acknowledgment
This material is the result of work supported with resources and the use of facilities at Veteran Health Indiana.
1. Centers for Disease Control and Prevention. Adult obesity facts. Accessed April 2020. https://www.cdc.gov/obesity/data/adult.html
2. The Management of Overweight and Obesity Working Group. VA/DoD Clinical Practice Guideline for Screening and Management of Overweight and Obesity. Accessed March 13, 2021. https://www.healthquality.va.gov/guidelines/CD/obesity/VADoDCPGManagementOfOverweightAndObesityFinal.pdf
3. Jensen MD, Ryan DH, Apovian CM, et al; American College of Cardiology/American Heart Association Task Force on Practice Guidelines; Obesity Society. 2013 AHA/ACC/TOS guideline for the management of overweight and obesity in adults: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines and the Obesity Society. J Am Coll Cardiol. 2014;63(25, pt B):2985-3023. doi:10.1016/j.jacc.2013.11.004
4. Apovian CM, Aronne LJ, Bessesen DH, et al; Endocrine Society. Pharmacological management of obesity: an Endocrine Society clinical practice guideline. J Clin Endocrinol Metab. 2015;100(2):342-362. doi:10.1210/jc.2014-3415
5. Rucker D, Padwal R, Li SK, Curioni C, Lau DCW. Long term pharmacotherapy for obesity and overweight: updated meta-analysis. BMJ. 2007;335(7631):1194-1199. doi:10.1136/bmj.39385.413113.25
6. Siebenhofer A, Winterholer S, Jeitler K, et al. Long-term effects of weight-reducing drugs in people with hypertension. Cochrane Database Syst Rev. 2021;1:CD007654. doi:10.1002/14651858.CD007654.pub5
7. Bramante CT, Raatz S, Bomber EM, Oberle MM, Ryder JR. Cardiovascular risks and benefits of medications used for weight loss. Front Endocrinol (Lausanne). 2020;10:883. doi:10.3389/fendo.2019.00883
8. Christensen R, Kristensen PK, Bartels EM, Bliddal H, Astrup A. Efficacy and safety of the weight-loss drug rimonabant: a meta-analysis of randomized trials. Lancet. 2007;370(9600):1706-1713. doi:10.1016/S0140-6736(07)61721-8
9. US Food and Drug Administration. FDA requests the withdrawal of the weight-loss drug Belviq, Belviq XR (lorcaserin) from the market. Accessed April 2020. https://www.fda.gov/drugs/drug-safety-and-availability/fda-requests-withdrawal-weight-loss-drug-belviq-belviq-xr-lorcaserin-market
10. Pi-Sunyer X, Astrup A, Fujioka K, et al; SCALE Obesity and Prediabetes NN8022-1839 Study Group. A randomized, controlled trial of 3.0 mg of liraglutide in weight management. N Engl J Med. 2015;373(1):11-22. doi:10.1056/NEJMoa1411892
11. Gadde KM, Allison DB, Ryan DH, et al. Effects of low-dose, controlled-release phentermine plus topiramate combination on weight and associated comorbidities in overweight and obese adults (CONQUER): a randomized, placebo-controlled, phase 3 trial. Lancet. 2011;377(9774):1341-1352. doi:10.1016/S0140-6736(11)60205-5
12. Sjöström L, Rissanen A, Andersen T, et al. Randomised placebo-controlled trial of orlistat for weight loss and prevention of weight regain in obese patients. European Multicentre Orlistat Study Group. Lancet. 1998;352(9123):167-172. doi:10.1016/s0140-6736(97)11509-4
13. Smith SR, Weissman NJ, Anderson CM, et al; Behavioral Modification and Lorcaserin for Overweight and Obesity Management (BLOOM) Study Group. Multicenter, placebo-controlled trial of lorcaserin for weight loss. N Engl J Med. 2010;363(3):245-256. doi:10.1056/NEJMoa0909809
14. Warkentin LM, Das D, Majumdar SR, Johnson JA, Padwal RS. The effect of weight loss on health-related quality of life: systematic review and meta-analysis of randomized trials. Obes Rev. 2014;15(3):169-182. doi:10.1111/obr.12113
15. Finkelstein EA, Trogdon JG, Cohen JW, Dietz W. Annual medical spending attributable to obesity: payer-and service-specific estimates. Health Aff (Millwood). 2009;28(5):w822-831. doi:10.1377/hlthaff.28.5.w822
16. Greenway FL, Fujioka K, Plodkowski RA, et al; COR-I Study Group. Effect of naltrexone plus bupropion on weight loss in overweight and obese adults (COR-I): a multicenter, randomized, double-blind, placebo-controlled phase 3 trial. Lancet. 2010;376(9741):595-605. doi:10.1016/S0140-6736(10)60888-4
17. Contrave. Prescribing information. Nalpropion Pharmaceuticals, Inc; 2019.
18. Qsymia. Prescribing information. VIVUS Inc; 2018.
Conclusions
Despite the limitations discussed, this study shows that the use of weight management medications in a general veteran population produces initial weight loss consistent with previous studies. However, there is room for continued improvement in follow-up strategies to promote greater weight maintenance after initial weight loss. Considering the high health care costs, personal burden, and potential long-term complications associated with obesity, efforts to promote development of programs that support weight management and maintenance are imperative.
Acknowledgment
This material is the result of work supported with resources and the use of facilities at Veteran Health Indiana.
The American Heart Association, the American College of Cardiology, and the Obesity Society define overweight as a body mass index (BMI) of 25 to 29.9 and obesity as a BMI ≥ 30. Morbid obesity is defined as a BMI ≥ 35 or 40.2,3 Based on these BMI cutoffs, the Endocrine Society recommends diet and lifestyle as the foundation of weight management and pharmacotherapy for those with a BMI ≥ 30 without comorbidities. In patients with a BMI ≥ 27, weight management medications may be considered if a patient has comorbid hypertension, T2DM, dyslipidemia, metabolic syndrome, obstructive sleep apnea, or nonalcoholic fatty liver disease. Patients with BMI > 40 are eligible for weight loss surgery.4
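As a sketch, the BMI thresholds above reduce to a simple eligibility check. The function names are illustrative only, not from any guideline tooling:

```python
def pharmacotherapy_eligible(bmi: float, has_comorbidity: bool) -> bool:
    """Weight management pharmacotherapy per the cutoffs described above:
    BMI >= 30, or BMI >= 27 with a qualifying comorbidity (eg, hypertension,
    T2DM, dyslipidemia, obstructive sleep apnea)."""
    return bmi >= 30 or (bmi >= 27 and has_comorbidity)


def surgery_eligible(bmi: float) -> bool:
    """Weight loss surgery eligibility: BMI > 40."""
    return bmi > 40
```

For example, a patient with a BMI of 28 qualifies for pharmacotherapy only when a qualifying comorbidity is present.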
Lifestyle and dietary interventions are the foundation of current weight management guidelines from the Endocrine Society.4 At a minimum, guidelines recommended enrolling motivated patients in a high-intensity lifestyle intervention class of at least 14 sessions in the first 6 months to reach a goal weight loss of 5 to 10% from baseline and to maintain a reduction of 3 to 5% from baseline.3 Medications are recommended as an adjunct to lifestyle and dietary changes. Most weight management medications work in the brain to stimulate satiety signaling, which helps motivated patients adhere to their dietary interventions, assist those who have been unsuccessful in earlier weight loss attempts, and help maintain weight.3,4
Guidelines recommend 7 weight management medications, including orlistat (both prescription strength and over-the-counter), liraglutide, phentermine, phentermine/topiramate, lorcaserin, and naltrexone/bupropion. Using medications to assist with weight loss increases the likelihood that patients will achieve 5 to 10% weight loss from baseline.5,6 Studies of the long-term effects of these medications on weight loss have found improvements in blood pressure (BP), biomarkers for cardiovascular disease, and T2DM-related comorbidities.3,5,7
Positive effects on comorbidities have been found to be related to drug class and mechanism of action (MOA); those that also are approved for T2DM have demonstrated the most favorable cardiovascular effects.7 Other medications that work as stimulants or as modulators of serotonin pathways are associated with increased risks, prompting the US Food and Drug Administration (FDA) to remove some medications from the market.7,8 In January 2020, lorcaserin was taken off the market because of increased risk of cancer found in postmarketing surveillance.9 The benefit of weight loss must be weighed against the risk of medication use.
Monthly follow-up is recommended when initiating weight management medications to assess safety and efficacy; medications should be discontinued if weight loss is inadequate in the first 3 months.1,3,4 Limited studies have assessed the long-term use of weight management medications in a real-world setting. Medications are prescribed for weight management at Veteran Health Indiana (VHI) in outpatient clinics, including primary care, endocrinology, and gastrointestinal (GI) specialties. However, prescribing practices, outcomes, and adherence to guideline recommendations have not been studied. Data from this study will be used to better understand how VHI can serve its veterans through diet, lifestyle, and pharmacologic interventions.
Methods
We conducted a single-center, retrospective chart review for patients started on weight management medications at VHI. A patient list was generated based on prescription fills from June 1, 2017 to June 30, 2019. All data were obtained using the Computerized Patient Record System and patients were not contacted. This study was approved by the Indiana University Health Institutional Review Board and the VHI Research and Development Committee.
At the time of the study, 5 weight management medications were available at VHI: orlistat, liraglutide, phentermine/topiramate, naltrexone/bupropion, and lorcaserin.
Patients were included in the study if they received a prescription for any 1 of the 5 available medications during the enrollment period. Patients were excluded if they received a prescription from or were treated by a civilian health care provider, if they never used the medication, or if their weight loss was attributed to a cancer diagnosis. These criteria yielded 86 patients, who generated 96 unique weight loss prescriptions. Data were collected for each instance of medication use, so some patients were included multiple times. In these cases, data collection for the failed medication ended when failure was documented, and new data points began when a new medication was prescribed; all data were collected per medication, not per patient. This method was used to account for medication failure and provide accurate weight loss results based on medication choice within this institution.
The primary outcomes included total weight loss and weight loss as a percentage of baseline weight at 3, 6, 12, and > 12 months of therapy. Secondary outcomes included weight loss of 5% from baseline, rate of successful weight maintenance after initial weight loss of 5% from baseline, adverse drug reaction (ADR) monitoring, and use of weight management medications across clinics at VHI.
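The percentage outcome above is weight change expressed relative to baseline. A minimal sketch (function name ours, for illustration only):

```python
def pct_weight_loss(baseline_kg: float, current_kg: float) -> float:
    """Weight loss as a percentage of baseline weight (positive = loss)."""
    return (baseline_kg - current_kg) / baseline_kg * 100
```

A patient who drops from 130 kg to 117 kg has lost 10% of baseline weight, meeting both the 5% threshold used for the secondary outcome and the 5 to 10% guideline goal.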
Demographic data included race, age, sex, baseline weight, BMI, and comorbid medical conditions. Comorbidities were collected based on the most recent primary care clinical note before initiating medication. Medication data collected included medications used to manage comorbidities. Data related to weight management medication included prescribing clinic, reason for medication discontinuation, or bariatric surgery intervention if applicable.
Efficacy outcome data included weight and BMI across therapy duration. Safety outcomes data included heart rate, BP, and ADRs that resulted in medication discontinuation as documented in the electronic health record (EHR).
We used descriptive statistics, including mean, standard deviation (SD), range, and percentage. For continuous data, Kruskal-Wallis tests were used because of nonparametric data distribution among the different medications, with a prespecified α = 0.05. With the observed sample sizes and SDs in this study, post hoc power calculations showed that the study had 80% power at a 5% significance level to detect weight changes of 8.6 kg, 7.3 kg, and 12.4 kg at 3, 6, and 12 months, respectively, using nonparametric tests.
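For reference, the Kruskal-Wallis H statistic used here can be sketched with the standard library alone (no tie correction; in practice a statistical package such as scipy.stats.kruskal would be used):

```python
from itertools import chain


def kruskal_h(*groups):
    """Kruskal-Wallis H statistic for k independent samples (no tie
    correction). Ranks the pooled data, averaging ranks over ties, then
    compares each group's mean rank against the grand mean rank."""
    all_vals = sorted(chain.from_iterable(groups))
    ranks = {}
    i = 0
    while i < len(all_vals):
        j = i
        while j < len(all_vals) and all_vals[j] == all_vals[i]:
            j += 1  # find the end of a run of tied values
        ranks[all_vals[i]] = (i + 1 + j) / 2  # mean of 1-based ranks i+1..j
        i = j
    n = len(all_vals)
    return 12 / (n * (n + 1)) * sum(
        len(g) * (sum(ranks[v] for v in g) / len(g)) ** 2 for g in groups
    ) - 3 * (n + 1)
```

Under the null hypothesis, H is approximately chi-squared with k − 1 degrees of freedom, which is how the P values at α = 0.05 would be obtained.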
Results
A total of 86 patients were identified based on prescription fills, which produced 99 unique instances of medication use. Of these 99, 3 met exclusion criteria and were not included in the final analysis. Among included veterans, 16 were female and 80 were male (Table 1). Most patients were White (86%) and male (83%); mean age was 53 years. At baseline, mean weight was 130 kg and mean BMI was 41.
Comorbidities and Medication Use
Hypertension (66%), hyperlipidemia (64%), and psychiatric diagnoses (50%) were the most common comorbid conditions. Substance use (23%) and T2DM (40%) were the most common comorbidities influencing medication choice. For this analysis, substance use included amphetamine and cocaine use.
Phentermine/topiramate is the preferred first-line agent unless patients have contraindications for use, in which case naltrexone/bupropion is recommended, based on guidelines for weight management medications within the VHI system. However, for patients with comorbid T2DM, liraglutide is preferred because of its beneficial effects for both weight loss and blood glucose control.2 Most patients at VHI were started on liraglutide (44%) or phentermine/topiramate (42%), which was in line with recommendations. Our sample included ≥ 1 prescription for each medication available at our facility, although the number of patients on each medication was not equal. Of note, the one patient taking lorcaserin at the time of study discontinued therapy in response to recent FDA guidance.9
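The local preference hierarchy described above can be sketched as a decision function. This is a simplification with hypothetical names, not the actual VHI criteria logic:

```python
def preferred_agent(has_t2dm: bool, pt_contraindicated: bool) -> str:
    """Simplified VHI first-line selection: liraglutide for comorbid T2DM;
    otherwise phentermine/topiramate, falling back to naltrexone/bupropion
    when phentermine/topiramate is contraindicated."""
    if has_t2dm:
        return "liraglutide"
    if pt_contraindicated:
        return "naltrexone/bupropion"
    return "phentermine/topiramate"
```

The observed prescribing pattern (44% liraglutide, 42% phentermine/topiramate) is consistent with this hierarchy given the 40% prevalence of T2DM in the sample.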
Medications prescribed for comorbid conditions could have contributed to weight gain. In the patient sample, β-blockers (n = 24) and anticonvulsants, including gabapentin and pregabalin (n = 22), were the most common. Other medications that could have contributed to weight gain included sulfonylureas (n = 5), antipsychotics (n = 4), tricyclic antidepressants (n = 2), and hormone replacement therapies (n = 2).
Primary Outcomes
The mean weight of participants dropped from 129.9 kg to 114.2 kg over the 12 months of weight management medication therapy, an absolute difference of 15.8 kg (Figure 1 and eTable 1 available at doi:10.12788/fp.0117). Weight loss was recorded at 3, 6, 12, and > 12 months of weight management therapy. At each time point, weight loss was statistically significant (P < .001) compared with baseline (Table 2), even though not every patient had weight loss records at each time point.
When classified by medication choice,
Secondary Outcomes
More than one-half of the patients analyzed lost 5 to 10% from baseline while taking weight management medication.
Among patients who lost at least 5% from baseline, we performed further analysis to assess weight maintenance of 3 to 5% from baseline for 12 months.
We found that most of our prescriptions (n = 50) were entered by the endocrinology department in conjunction with the MOVE! program (eTable 3 available at doi:10.12788/fp.0117). All 4 of our primary care clinics prescribed weight loss medication, although prescribing was concentrated in 1 clinic. Other prescriptions came from community-based outpatient clinics or other specialties, including gastroenterology, orthopedics, and sleep medicine.
Nineteen (18%) patients experienced an adverse event (AE) that led to medication discontinuation, as recorded in their chart (eTable 4 available at doi:10.12788/fp.0117). The most common AEs were GI upset with liraglutide or orlistat and dull aching and pain with phentermine/topiramate. Two severe AEs occurred: 1 patient experienced a change in mental health status and a suicide attempt with naltrexone/bupropion, and 1 patient discontinued phentermine/topiramate because of a change in neurologic status.
Medications were stopped primarily because of inadequate weight loss (n = 13), and most of these patients tried additional medications; however, 1 medication failure resulted in sleeve gastrectomy. Other reasons for medication discontinuation included missed MOVE! appointments, loss to follow-up, and patient-elected discontinuation.
Discussion
This study evaluated the use and outcomes of weight management medications among veterans at VHI. The study aimed to better understand the efficacy and safety of these medications while exposing potential weaknesses in care and identifying avenues to improve weight loss and maintenance.
Clinical trials of weight management medications reported weight loss of 8 to 10 kg over 56 weeks, with 21 to 63% of patients losing at least 5% of baseline weight.10-14 Our study found a higher average weight loss (−15.8 kg) than that reported in trials and a consistent percentage of patients (58.3%) who achieved at least 5% weight loss. It is promising that, when used in a noncontrolled setting, these medications produced weight loss consistent with results seen in large, controlled trials.
Pi-Sunyer and colleagues found continued weight loss after the initial 5% weight loss to an eventual 10% weight loss in many patients.10 Additionally, Smith and colleagues found that nearly 68% of their participants who took lorcaserin were able to maintain 3 to 5% weight loss over 12 months.13 Sjöström and colleagues acknowledged that many patients taking orlistat for an extended period began to gain weight, although at one-half the rate seen in the placebo group.12 Our study found that fewer patients were able to maintain their weight loss over 12 months, with only 30% of patients maintaining a 3 to 5% loss from baseline. This difference in weight maintenance was likely due to the uncontrolled nature of this study. Once patients reach their initial weight loss goal, even the most motivated will have trouble maintaining that weight.4 Despite the challenges associated with maintaining weight loss, the quality-of-life benefits patients gained and potential reductions in health care spending support using resources to improve these outcomes.2,14,15
Pi-Sunyer and colleagues reported high incidences of nausea (40%), vomiting (16%), diarrhea (21%), and constipation (20%) with liraglutide.10 Sjöström and colleagues reported that 7% of patients experienced GI upset with orlistat.12 Comparatively, only 17% of our patients reported AEs that required discontinuation, including GI upset. One patient in our study discontinued naltrexone/bupropion because of a significant change in mental status and a suicide attempt. Clinical trials did not report a greater risk of depression or suicidality compared with placebo; however, the labeling of naltrexone/bupropion carries a warning for increased suicidality with the use of antidepressant agents.16,17 The neurologic AE that required discontinuation of phentermine/topiramate at our institution appears unique based on published information.11,18
The data from this study reinforce the observation that weight maintenance is the most challenging aspect of weight loss. Although our data showed clinically meaningful weight loss from baseline, many patients regained weight, and some exceeded their baseline weight. Beyond providing these medications, this evidence suggests the need for close, continued follow-up throughout patients’ weight loss journey.
Limitations
First, because this is a retrospective chart review, data collection was influenced by and limited to information recorded in the EHR. AEs that resulted in medication discontinuation were assessed from the patient’s chart, which might not be accurate if providers did not update the records. Follow-up was not always scheduled at regular intervals after medication initiation, resulting in varying sample numbers at each time point and potentially distorting true weight loss averages. Although not included in this analysis, it might be beneficial to evaluate adherence to recommendations for follow-up laboratory and weight monitoring to better capture where future monitoring can be improved. Second, the number of patients taking each medication was unbalanced. Specifically, we saw a change in weight with orlistat that exceeded what is consistently seen in larger, more controlled trials. Although this reflects real-world use, small sample sizes cannot be generalized to the larger population and might yield data reflecting that of an outlier. Last, generalizability is limited by the veteran population demographic, which is predominantly male and lacks ethnic diversity. This study also was carried out at a single academic tertiary medical center, and its findings might not apply to all populations.
Conclusions
Despite the limitations discussed, this study shows that the use of weight management medications in a general veteran population produces initial weight loss consistent with previous studies. However, there is room for continued improvement in follow-up strategies to promote greater weight maintenance after initial weight loss. Considering the high health care costs, personal burden, and potential long-term complications associated with obesity, efforts to promote development of programs that support weight management and maintenance are imperative.
Acknowledgment
This material is the result of work supported with resources and the use of facilities at Veteran Health Indiana.
1. Centers for Disease Control and Prevention. Adult obesity facts. Accessed April 2020. https://www.cdc.gov/obesity/data/adult.html
2. The Management of Overweight and Obesity Working Group. VA/DoD Clinical Practice Guideline for Screening and Management of Overweight and Obesity. Accessed March 13, 2021. https://www.healthquality.va.gov/guidelines/CD/obesity/VADoDCPGManagementOfOverweightAndObesityFinal.pdf
3. Jensen MD, Ryan DH, Apovian CM, et al; American College of Cardiology/American Heart Association Task Force on Practice Guidelines; Obesity Society. 2013 AHA/ACC/TOS guideline for the management of overweight and obesity in adults: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines and the Obesity Society. J Am Coll Cardiol. 2014;63(25, pt B):2985-3023. doi:10.1016/j.jacc.2013.11.004
4. Apovian CM, Aronne LJ, Bessesen DH, et al; Endocrine Society. Pharmacological management of obesity: an Endocrine Society clinical practice guideline. J Clin Endocrinol Metab. 2015;100(2):342-362. doi:10.1210/jc.2014-3415
5. Rucker D, Padwal R, Li SK, Curioni C, Lau DCW. Long term pharmacotherapy for obesity and overweight: updated meta-analysis. BMJ. 2007;335(7631):1194-1199. doi:10.1136/bmj.39385.413113.25
6. Siebenhofer A, Winterholer S, Jeitler K, et al. Long-term effects of weight-reducing drugs in people with hypertension. Cochrane Database Syst Rev. 2021;1:CD007654. doi:10.1002/14651858.CD007654.pub5
7. Bramante CT, Raatz S, Bomber EM, Oberle MM, Ryder JR. Cardiovascular risks and benefits of medications used for weight loss. Front Endocrinol (Lausanne). 2020;10:883. doi:10.3389/fendo.2019.00883
8. Christensen R, Kristensen PK, Bartels EM, Bliddal H, Astrup A. Efficacy and safety of the weight-loss drug rimonabant: a meta-analysis of randomized trials. Lancet. 2007;370(9600):1706-1713. doi:10.1016/S0140-6736(07)61721-8
9. US Food and Drug Administration. FDA requests the withdrawal of the weight-loss drug Belviq, Belviq XR (lorcaserin) from the market. Accessed April 2020. https://www.fda.gov/drugs/drug-safety-and-availability/fda-requests-withdrawal-weight-loss-drug-belviq-belviq-xr-lorcaserin-market
10. Pi-Sunyer X, Astrup A, Fujioka K, et al; SCALE Obesity and Prediabetes NN8022-1839 Study Group. A randomized, controlled trial of 3.0 mg of liraglutide in weight management. N Engl J Med. 2015;373(1):11-22. doi:10.1056/NEJMoa1411892
11. Gadde KM, Allison DB, Ryan DH, et al. Effects of low-dose, controlled-release phentermine plus topiramate combination on weight and associated comorbidities in overweight and obese adults (CONQUER): a randomized, placebo-controlled, phase 3 trial. Lancet. 2011;377(9774):1341-1352. doi:10.1016/S0140-6736(11)60205-5
12. Sjöström L, Rissanen A, Andersen T, et al. Randomised placebo-controlled trial of orlistat for weight loss and prevention of weight regain in obese patients. European Multicentre Orlistat Study Group. Lancet. 1998;352(9123):167-172. doi:10.1016/s0140-6736(97)11509-4
13. Smith SR, Weissman NJ, Anderson CM, et al; Behavioral Modification and Lorcaserin for Overweight and Obesity Management (BLOOM) Study Group. Multicenter, placebo-controlled trial of lorcaserin for weight loss. N Engl J Med. 2010;363(3):245-256. doi:10.1056/NEJMoa0909809
14. Warkentin LM, Das D, Majumdar SR, Johnson JA, Padwal RS. The effect of weight loss on health-related quality of life: systematic review and meta-analysis of randomized trials. Obes Rev. 2014;15(3):169-182. doi:10.1111/obr.12113
15. Finkelstein EA, Trogdon JG, Cohen JW, Dietz W. Annual medical spending attributable to obesity: payer-and service-specific estimates. Health Aff (Millwood). 2009;28(5):w822-831. doi:10.1377/hlthaff.28.5.w822
16. Greenway FL, Fujioka K, Plodkowski RA, et al; COR-I Study Group. Effect of naltrexone plus bupropion on weight loss in overweight and obese adults (COR-I): a multicenter, randomized, double-blind, placebo-controlled phase 3 trial. Lancet. 2010;376(9741):595-605. doi:10.1016/S0140-6736(10)60888-4
17. Contrave. Prescribing information. Nalpropion Pharmaceuticals, Inc; 2019.
18. Qsymia. Prescribing information. VIVUS Inc; 2018.
Outcomes Associated With Pharmacist-Led Consult Service for Opioid Tapering and Pharmacotherapy
In the late 1980s and early 1990s, an emphasis on better pain management led health care professionals (HCPs) to increase opioid prescribing to better manage patients’ pain. In 1991, 76 million opioid prescriptions were written in the United States; by 2011, the number had nearly tripled to 219 million.1 Overdose rates nearly tripled as well from 1999 to 2014.2 Of the 52,404 US deaths from drug overdoses in 2015, 63% involved an opioid.2
Opioid Safety Initiative
In response to the growing opioid epidemic, the US Department of Veterans Affairs (VA) created the Opioid Safety Initiative in 2014.3 This comprehensive, multifaceted initiative was designed to improve the care and safety of veterans managed with opioid therapy and to promote rational opioid prescribing and monitoring. In 2016 the Centers for Disease Control and Prevention (CDC) issued guidelines for opioid prescriptions, and the following year the VA and the US Department of Defense (DoD) updated the VA/DoD Clinical Practice Guidelines for Opioid Therapy for Chronic Pain (VA/DoD guidelines).4,5 After the release of these guidelines, the use of opioid tapers expanded. However, after public outcry over forced opioid tapering, in 2019 the US Food and Drug Administration updated its opioid labeling requirements to provide clearer guidance on opioid tapers for opioid-tolerant patients.6,7
As a result, HCPs began to develop various strategies to balance the safety and efficacy of opioid use in patients with chronic pain. The West Palm Beach VA Medical Center (WPBVAMC) in Florida has a Pain Clinic that includes 2 pain management clinical pharmacy specialists (CPSs) with specialized training in pain management, who are uniquely qualified to assess and evaluate medication therapy in complex pain patient cases. These CPSs were involved in the face-to-face management of patients requiring specialized pain care and participated in a pain pharmacy electronic consult (eConsult) service to document pain management consultative recommendations for patients appropriate for management at the primary care level. This formalized process increased specialty pain care access for veterans whose pain was managed by primary care providers (PCPs).
The pain pharmacy eConsult service was initiated at the WPBVAMC in June 2013 to assist PCPs in the management of outpatients with chronic pain. The eConsult service includes evaluation of a patient’s electronic health records (EHRs) by CPSs. The eConsult service also provided PCPs with the option to engage a pharmacist who could provide recommendations for opioid dosing conversion, opioid tapering, pain pharmacotherapy, or drug screen interpretation, without the necessity for an additional patient visit.
Subsequent to the release of the 2016 CDC (and later the 2017 VA/DoD) guidelines recommending reduced morphine equivalent daily dose (MEDD) levels, the WPBVAMC had a large increase in pain eConsult requests for opioid tapering and opioid pharmacotherapy: requests in March, April, and May increased 3.4-fold compared with the following 9 months, and requests for opioid tapers increased nearly 4-fold during the same period. However, the impact of the completed eConsults was unclear. Therefore, the primary objective of this study was to assess the effect of CPS services for opioid tapering and opioid pharmacotherapy by quantifying the number of recommendations accepted/implemented by PCPs. The secondary objectives included evaluating harms associated with the recommendations (eg, increase in visits to the emergency department [ED], hospitalizations, suicide attempts, or PCP visits) and provider satisfaction.
Methods
A retrospective chart review was completed to assess data of patients from the WPBVAMC and its associated community-based outpatient clinics (CBOCs). The project was approved by the WPBVAMC Scientific Advisory Committee as part of the facility’s performance improvement efforts.
Included patients had a pain pharmacy eConsult placed between April 1, 2016 and March 31, 2017. EHRs were reviewed and only eConsults for opioid pharmacotherapy recommendation or opioid tapers were evaluated. eConsults were excluded if the request was discontinued, completed by a HCP other than the pain CPS, or placed for an opioid dose conversion, nonopioid pharmacotherapy, or drug screen interpretation.
Data for analyses were entered into Microsoft Excel 2016 and were securely saved and accessible to relevant researchers. Patient protected health information used during patient care remained confidential.
Demographic data were collected, including age, gender, race, pertinent medical comorbidities (eg, diabetes mellitus, sleep apnea), and mental health comorbidities. Pain scores were collected at baseline and 6 months postconsult. Pain medications used by patients were noted at baseline and 6 months postconsult, including concomitant opioid and benzodiazepine use, MEDD, and other pain medication. The duration of time needed by the pain CPS to complete each eConsult and the total time from eConsult entry to HCP implementation of the initial recommendation were collected. The number of actionable recommendations (eg, changes in drug therapy, urine drug screens [UDSs], and referrals to other services) also was recorded and reviewed 6 months postconsult to determine the number and percentage of recommendations implemented by the HCP. The EHR was examined to determine adverse events (AEs) (eg, any documentation of suicide attempt, calls to the Veterans Crisis Line, or death 6 months postconsult). Collected data also included new eConsults, the reason for opioid tapering either by HCP or patient, and assessment of economic harms (count of the number of visits to the ED, hospitalizations, or unscheduled PCP visits with uncontrolled pain as the chief reason within 6 months postconsult). Last, PCPs were sent a survey to assess their satisfaction with the pain eConsult service.
Results
Of 517 eConsults received from April 1, 2016 to March 31, 2017, 285 (55.1%) met inclusion criteria (Figure). Using a random number generator, 100 eConsults were further reviewed for outcomes of interest.
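The random selection step described above can be sketched as follows; this is an illustrative reconstruction, not the authors' actual tooling, and the consult identifiers and seed are hypothetical.

```python
import random

# Of 285 eligible eConsults, 100 were randomly selected for outcome review.
eligible_ids = list(range(1, 286))   # hypothetical consult identifiers, 1..285
random.seed(0)                       # fixed seed so this sketch is reproducible
sampled = random.sample(eligible_ids, 100)  # sampling without replacement
```

Sampling without replacement guarantees no eConsult is reviewed twice, which matches the stated design of reviewing 100 distinct consults.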
In this cohort, the mean age was 61 years, 87% were male, and 80% were White individuals. Most patients (83%) had ≥ 1 mental health comorbidity, and 53% had ≥ 2, with depressive symptoms, tobacco use, and/or posttraumatic stress disorder the most common diagnoses (Table 1). Eighty-seven percent of eConsults were for opioid tapers and the remaining 13% were for opioid pharmacotherapy.
The median pain score at time of consult was 6 on a 10-point scale, with no change at 6 months postconsult. However, 41% of patients overall had a median 3.3-point drop in pain score, 17% had no change in pain score, and 42% had a median 2.6-point increase in pain score.
At time of consult, 24% of patients had an opioid and benzodiazepine prescribed concurrently. At the time of the initial request, the mean MEDD was 177.5 mg (median, 165; range, 0-577.5). At 6 months postconsult, the mean MEDD was 71 mg (median, 90; range, 0-450), a mean 44% MEDD decrease. Eighteen percent of patients had no change in MEDD, and 5% had an increase.
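Two percent-decrease figures appear in this report (a mean 44% per-patient decrease here; a 60% decrease in the Discussion), and both are consistent with the dose data. A minimal arithmetic sketch of the distinction, using the values reported above:

```python
# Cohort mean MEDD values reported in the text.
baseline_mean = 177.5   # mg, mean MEDD at initial request
followup_mean = 71.0    # mg, mean MEDD 6 months postconsult

# Percent decrease of the cohort means: (177.5 - 71) / 177.5 * 100 = 60%.
decrease_of_means = (baseline_mean - followup_mean) / baseline_mean * 100

# Note: the mean of each patient's individual percent decrease (44% in the
# text) generally differs from the percent decrease of the cohort means,
# because patients start from different baseline doses.
```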
One concern was the number of patients whose pain management regimen consisted of either opioids as monotherapy or a combination of opioids and skeletal muscle relaxants (SMRs), which can increase the opioid overdose risk and are not indicated for long-term use (except for baclofen for spasticity). Thirty-five percent of patients were taking either opioid monotherapy or opioids and SMRs for chronic pain management at time of consult and 28% were taking opioid monotherapy or opioids and SMRs 6 months postconsult.
Electronic Consults
Table 2 describes the reasons eConsults were requested. The most common reason was to taper the dose into compliance with the 2016 CDC guideline recommendation of MEDD < 90 mg; the subsequent VA/DoD guideline set the threshold at < 100 mg.
eConsults were completed a mean of 11.5 days after the PCP request, including nights and weekends. The CPS spent a mean of 66.8 minutes completing each eConsult. Once an eConsult was completed, PCPs took a mean of 9 days to initiate the primary recommendation. This 9-day mean excludes 11 eConsults with no accepted recommendations and 11 eConsults for which the PCP implemented the primary recommendation before the CPS completed the consult, most likely because of a phone call or direct contact with the CPS at the time the eConsult was ordered.
A mean of 3.5 actionable recommendations were made by the CPS, and a mean of 1.6 recommendations were implemented within 6 months by the PCP. At least 1 recommendation was accepted/implemented for 89% of patients, and a mean of 55% of recommendations per patient were accepted/implemented. Eleven percent of the final eConsult recommendations were not accepted by PCPs, and clear documentation of the reasons was not provided.
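The mean per-patient acceptance percentage (55%) need not equal the ratio of the overall means (1.6 / 3.5 ≈ 46%), because patients received different numbers of recommendations. A short sketch with hypothetical counts (not the study's actual data) illustrates the difference:

```python
# Hypothetical per-patient counts, for illustration only.
made     = [4, 2, 5, 3]   # actionable recommendations made per patient
accepted = [2, 2, 1, 2]   # recommendations accepted/implemented per patient

# Pooled rate: total accepted over total made (analogous to 1.6 / 3.5).
pooled_rate = sum(accepted) / sum(made)                              # 7/14 = 0.5

# Mean of per-patient rates (analogous to the 55% figure in the text).
mean_of_patient_rates = sum(a / m for a, m in zip(accepted, made)) / len(made)
```

Patients with few recommendations weigh the per-patient mean differently than the pooled ratio, so both summaries can be reported without contradiction.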
Adverse Outcomes
In the 6 months postconsult, 11 patients (7 men and 4 women) experienced 32 AEs (Table 3). Eight patients had 15 ED visits, 3 of which resulted in hospitalizations; 8 patients had 9 unscheduled PCP visits; 1 patient reported suicidal ideation; and 2 patients made a total of 4 calls to the Veterans Crisis Line. There were also 2 deaths; however, both were due to end-stage disease (cirrhosis and amyotrophic lateral sclerosis) and were not believed to be related to eConsult recommendations.
Eight patients had a history of substance use disorders (SUDs) and 8 had a history of a mood disorder or psychosis. One patient had both SUD and a mood/psychosis-related mental health disorder, including a reported suicide attempt/ideation at an ED visit and a subsequent hospitalization. A similar number of AEs occurred in patients with decreases in MEDD of 0 to 24% compared with those who received more aggressive tapers of 75 to 100% (Table 4).
Primary Care Providers
Nine patients were reconsulted; only 1 reconsult was secondary to the PCP not implementing recommendations from the initial consult. No factors were found that correlated with the likelihood of a patient being reconsulted.
Surveys on PCP satisfaction with the eConsult service were completed by 29 of the 55 PCPs. PCP feedback was generally positive with nearly 90% of PCPs planning to use the service in the future as well as recommending use to other providers.
PCPs also were given the option to indicate the most important factor for overall satisfaction with the eConsult service (time, access, safety, expectations, or confidence). Safety was providers' top choice, with time a close second.
Discussion
For most (89%) patients, PCPs accepted at least 1 recommendation from the completed eConsult, and the mean MEDD decreased by 60%, likely reducing patients' risk of overdose or other AEs from opioids. There also was a slight reduction in patients' mean pain scores; however, 41% had a decrease and 42% had an increase in pain scores. There was no clear relationship between pain scores and MEDDs, lending credence to the idea that pain scores are largely subjective and an unreliable surrogate marker for assessing the effectiveness of analgesic regimens.
Eleven patients experienced AEs, including 1 patient for whom the recommendations were not implemented by the PCP. Eight of the 11 had multiple AEs. One interesting finding was that 7 of the 11 patients with an AE tested positive for unexpected substances on routine UDS or were arrested for driving while intoxicated (DWI). However, only 3 of the 7 had an active SUD diagnosis. With 25% of the AEs coming from patients with a history of SUD, it is important that any history of SUD be documented in the EHR. Maintaining this documentation can be especially difficult if patients switch VA medical centers or receive services outside the VA. Thorough and accurate history and chart review should ideally be completed before prescribing opioids.
Guidelines
While the PCPs were following VA/DoD and CDC recommendations for opioid tapering to < 100 or 90 mg MEDD, respectively, there is weak evidence in these guidelines to support specific MEDD cutoffs. The CDC guidelines even state, “a single dosage threshold for safe opioid use could not be identified.”5 One of the largest issues with using MEDD as a cutoff is the lack of agreement on its calculation. In 2014, Nuckols and colleagues conducted a study to compare the existing guidelines on the use of opioids for chronic pain. While 13 guidelines were considered eligible, most recommendations were supported only by observational data or expert recommendations, and there was no consensus on what constitutes a “morphine equivalent.”8 Currently there is no universally accepted opioid-conversion method, resulting in a substantial problem when calculating a MEDD.9 A survey of 8 online opioid dose conversion tools found a -55% to +242% variation.10 As Fudin and colleagues concluded in response to the large variations found in these various analyses, the studies “unequivocally disqualify the validity of embracing MEDD to assess risk in any meaningful statistical way.”11 Pharmacogenetics, drug tolerance, drug-drug interactions, body surface area, and organ function are patient-specific factors that are not taken into consideration when relying solely on a MEDD calculation. Tapering to the lowest functional dose rather than to a specific number or cutoff may be a more effective way to treat patients, and providers should use the guidelines as recommendations, not as hardline mandates.
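The variability described above arises because a MEDD is simply a weighted sum of daily doses, and the weights (conversion factors) differ among tools. A minimal sketch of the calculation; the factors below are one illustrative set resembling commonly published oral morphine-equivalent factors, not an endorsed standard, and drug names and values are assumptions for demonstration:

```python
# One illustrative set of oral morphine-equivalent conversion factors.
# Different tools use different factors, which is exactly the -55% to
# +242% variation the text describes.
FACTORS = {
    "morphine": 1.0,
    "hydrocodone": 1.0,
    "oxycodone": 1.5,
    "hydromorphone": 4.0,
}

def medd(regimen):
    """regimen: iterable of (drug, total mg per day); returns MEDD in mg."""
    return sum(FACTORS[drug] * mg_per_day for drug, mg_per_day in regimen)

# Hypothetical regimen: oxycodone 60 mg/d plus hydrocodone 20 mg/d.
daily = medd([("oxycodone", 60.0), ("hydrocodone", 20.0)])  # 110.0 mg under these factors
```

Because the result scales linearly with the chosen factors, two calculators with different tables can place the same patient on opposite sides of a 90 mg or 100 mg threshold, which is why the text cautions against treating any single cutoff as definitive.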
At 6 months, 6 patients were receiving no pain medications from the VA, and 24 patients had been tapered from their opioid to discontinuation. It is unclear whether these patients are no longer taking opioids or switched their care to non-VA providers to receive medications, including opioids, privately. This is difficult to verify, though a prescription drug monitoring program (PDMP) could be used to assess patient adherence. Because many of the patients were tapered after identification of aberrant behaviors, a lack of continuity of care across health care systems may result in future patient harm.
The results of this analysis highlight the importance of checking PDMP databases and routine UDSs when prescribing opioids—there can be serious safety concerns if patients are taking other prescribed or illicit medications. However, care must be taken; there were 2 instances of patients' chronic opioid prescriptions being discontinued by their VA provider after a review of the PDMP showed they had received non-VA opioids. In both cases, the quantities and doses received were small (counts of ≤ 12) and had been received more than 6 months before the PDMP check. While this constitutes a breach of the informed consent agreement for long-term opioid use, if there are no other concerning behaviors, it may be more prudent to review the informed consent with the patient and discuss why the behavior is a breach, so that patients and PCPs continue to work as a team to manage chronic pain.
Limitations
The study population was one limitation of this project. While data suggest that chronic pain affects women more than men, this study's population was only 13% female. Thirty percent of the women in this study had an AE compared with only 8% of the men. Additional limitations included the use of the problem list for comorbidities, as lists may be inaccurate or outdated, and limiting the monitoring of AEs to only 6 months. Because some tapers were not initiated immediately and some taper schedules can last several months to years, AE counts may have been higher if patients were followed longer. Many of the patients with AEs had more ED visits or unscheduled primary care visits as the tapers went on and their pain worsened, but those visits fell outside the 6-month time frame for data collection. An additional weakness of this review was that it assessed pain scores but not functional status, which may be a better predictor of the effectiveness of a patient's pain management regimen; functional assessment is needed in future studies for more reliable data. Finally, PCP survey results should be viewed with caution: the current survey had only 29 respondents, and the 2014 survey had only 10 respondents and did not include CBOC providers.
Conclusion
A pain eConsult service managed by CPSs specializing in pain management can assist patients and PCPs with opioid therapy recommendations in a safe and timely manner, reducing the risk of overdose secondary to high-dose opioid therapy with limited harm to patients.
1. National Institute on Drug Abuse. Increased drug availability is associated with increased use and overdose. Published June 9, 2020. Accessed February 19, 2021. https://www.drugabuse.gov/publications/research-reports/prescription-opioids-heroin/increased-drug-availability-associated-increased-use-overdose
2. Rudd RA, Seth P, David F, Scholl L. Increases in drug and opioid-involved overdose deaths - United States, 2010-2015. MMWR Morb Mortal Wkly Rep. 2016;65(50-51):1445-1452. Published 2016 Dec 30. doi:10.15585/mmwr.mm655051e1
3. US Department of Veterans Affairs, Office of Inspector General. Healthcare inspection – VA patterns of dispensing take-home opioids and monitoring patients on opioid therapy. Report 14-00895-163. Published May 14, 2014. Accessed February 2, 2021. https://www.va.gov/oig/pubs/VAOIG-14-00895-163.pdf
4. US Department of Veterans Affairs, US Department of Defense, Opioid Therapy for Chronic Pain Work Group. VA/DoD clinical practice guidelines for opioid therapy for chronic pain. Version 3.0. Published December 2017. Accessed February 2, 2021. https://www.va.gov/HOMELESS/nchav/resources/docs/mental-health/substance-abuse/VA_DoD-CLINICAL-PRACTICE-GUIDELINE-FOR-OPIOID-THERAPY-FOR-CHRONIC-PAIN-508.pdf
5. Dowell D, Haegerich TM, Chou R. CDC Guideline for Prescribing Opioids for Chronic Pain - United States, 2016 [published correction appears in MMWR Recomm Rep. 2016;65(11):295]. MMWR Recomm Rep. 2016;65(1):1-49. Published 2016 Mar 18. doi:10.15585/mmwr.rr6501e1
6. US Food and Drug Administration. FDA identifies harm reported from sudden discontinuation of opioid pain medicines and requires label changes to guide prescribers on gradual, individualized tapering. Updated April 17, 2019. Accessed February 2, 2021. https://www.fda.gov/drugs/fda-drug-safety-podcasts/fda-identifies-harm-reported-sudden-discontinuation-opioid-pain-medicines-and-requires-label-changes
7. Dowell D, Haegerich T, Chou R. No Shortcuts to Safer Opioid Prescribing. N Engl J Med. 2019;380(24):2285-2287. doi:10.1056/NEJMp1904190
8. Nuckols TK, Anderson L, Popescu I, et al. Opioid prescribing: a systematic review and critical appraisal of guidelines for chronic pain. Ann Intern Med. 2014;160(1):38-47. doi:10.7326/0003-4819-160-1-201401070-00732
9. Rennick A, Atkinson T, Cimino NM, Strassels SA, McPherson ML, Fudin J. Variability in Opioid Equivalence Calculations. Pain Med. 2016;17(5):892-898. doi:10.1111/pme.12920
10. Shaw K, Fudin J. Evaluation and comparison of online equianalgesic opioid dose conversion calculators. Pract Pain Manag. 2013;13(7):61-66.
11. Fudin J, Pratt Cleary J, Schatman ME. The MEDD myth: the impact of pseudoscience on pain research and prescribing-guideline development. J Pain Res. 2016;9:153-156. Published 2016 Mar 23. doi:10.2147/JPR.S107794
In the late 1980s and early 1990s, an emphasis on better pain management led health care professionals (HCPs) to increase prescribing of opioids to better manage patients' pain. In 1991, 76 million prescriptions were written for opioids in the United States, and by 2011, the number had nearly tripled to 219 million.1 Overdose rates increased as well, nearly tripling from 1999 to 2014.2 Of the 52,404 US deaths from drug overdoses in 2015, 63% involved an opioid.2
Opioid Safety Initiative
In response to the growing opioid epidemic, the US Department of Veterans Affairs (VA) created the Opioid Safety Initiative in 2014.3 This comprehensive, multifaceted initiative was designed to improve the care and safety of veterans managed with opioid therapy and promote rational opioid prescribing and monitoring. In 2016 the Centers for Disease Control and Prevention (CDC) issued guidelines for opioid prescriptions, and the following year the VA and the US Department of Defense (DoD) updated the VA/DoD Clinical Practice Guidelines for Opioid Therapy for Chronic Pain (VA/DoD guidelines).4,5 After the release of these guidelines, the use of opioid tapers expanded. However, after public outcry over forced opioid tapering, in 2019 the US Food and Drug Administration updated its opioid labeling requirements to provide clearer guidance on opioid tapers for opioid-tolerant patients.6,7
As a result, HCPs began to develop various strategies to balance the safety and efficacy of opioid use in patients with chronic pain. The West Palm Beach VA Medical Center (WPBVAMC) in Florida has a Pain Clinic that includes 2 clinical pharmacy specialists (CPSs) with specialized training in pain management, who are uniquely qualified to assess and evaluate medication therapy in complex pain cases. These CPSs were involved in the face-to-face management of patients requiring specialized pain care and participated in a pain pharmacy electronic consult (eConsult) service to document pain management consultative recommendations for patients appropriate for management at the primary care level. This formalized process increased specialty pain care access for veterans whose pain was managed by primary care providers (PCPs).
The pain pharmacy eConsult service was initiated at the WPBVAMC in June 2013 to assist PCPs in the management of outpatients with chronic pain. The eConsult service included evaluation of a patient's electronic health records (EHRs) by the CPSs and provided PCPs the option to engage a pharmacist who could offer recommendations for opioid dose conversion, opioid tapering, pain pharmacotherapy, or drug screen interpretation, without the need for an additional patient visit.
Subsequent to the release of the 2016 CDC (and later the 2017 VA/DoD) guidelines recommending reduced morphine equivalent daily dose (MEDD) levels, the WPBVAMC had a large increase in pain eConsult requests for opioid tapering and opioid pharmacotherapy. Requests increased 3.4-fold in March, April, and May compared with the following 9 months, with a nearly 4-fold increase in requests for opioid tapers over the same period. However, the impact of the completed eConsults was unclear. Therefore, the primary objective of this study was to assess the effect of CPS services for opioid tapering and opioid pharmacotherapy by quantifying the number of recommendations accepted/implemented by PCPs. The secondary objectives included evaluating harms associated with the recommendations (eg, increases in visits to the emergency department [ED], hospitalizations, suicide attempts, or PCP visits) and provider satisfaction.
Methods
A retrospective chart review was completed to assess data of patients from the WPBVAMC and its associated community-based outpatient clinics (CBOCs). The project was approved by the WPBVAMC Scientific Advisory Committee as part of the facility’s performance improvement efforts.
Included patients had a pain pharmacy eConsult placed between April 1, 2016 and March 31, 2017. EHRs were reviewed and only eConsults for opioid pharmacotherapy recommendation or opioid tapers were evaluated. eConsults were excluded if the request was discontinued, completed by a HCP other than the pain CPS, or placed for an opioid dose conversion, nonopioid pharmacotherapy, or drug screen interpretation.
Data for analyses were entered into Microsoft Excel 2016 and were securely saved and accessible to relevant researchers. Patient protected health information used during patient care remained confidential.
Demographic data were collected, including age, gender, race, pertinent medical comorbidities (eg, diabetes mellitus, sleep apnea), and mental health comorbidities. Pain scores were collected at baseline and 6-months postconsult. Pain medications used by patients were noted at baseline and 6 months postconsult, including concomitant opioid and benzodiazepine use, MEDD, and other pain medication. The duration of time needed by pain CPS to complete each eConsult and total time from eConsult entered to HCP implementation of the initial recommendation was collected. The number of actionable recommendations (eg, changes in drug therapy, urine drug screens [UDSs], and referrals to other services also were recorded and reviewed 6 months postconsult to determine the number and percentage of recommendations implemented by the HCP. The EHR was examined to determine adverse events (AEs) (eg, any documentation of suicide attempt, calls to the Veterans Crisis Line, or death 6 month postconsult). Collected data also included new eConsults, the reason for opioid tapering either by HCP or patient, and assessment of economic harms (count of the number of visits to ED, hospitalizations, or unscheduled PCP visits with uncontrolled pain as chief reason within 6 months postconsult). Last, PCPs were sent a survey to assess their satisfaction with the pain eConsult service.
Results
Of 517 eConsults received from April 1, 2016 to March 31, 2017, 285 (55.1%) met inclusion criteria (Figure). Using a random number generator, 100 eConsults were further reviewed for outcomes of interest.
In this cohort, the mean age was 61 years, 87% were male, and 80% were White individuals. Most patients (83%) had ≥ 1 mental health comorbidity, and 53% had ≥ 2, with depressive symptoms, tobacco use, and/or posttraumatic stress disorder the most common diagnoses (Table 1). Eighty-seven percent of eConsults were for opioid tapers and the remaining 13% were for opioid pharmacotherapy.
The median pain score at time of consult was 6 on a 10-point scale, with no change at 6 months postconsult. However, 41% of patients overall had a median 3.3-point drop in pain score, 17% had no change in pain score, and 42% had a median 2.6-point increase in pain score.
At time of consult, 24% of patients had an opioid and benzodiazepine prescribed concurrently. At the time of the initial request, the mean MEDD was 177.5 mg (median, 165; range, 0-577.5). At 6 months postconsult, the average MEDD was 71 mg (median, 90; range, 0-450) for a mean 44% MEDD decrease. Eighteen percent of patients had no change in MEDD, and 5% had an increase.
One concern was the number of patients whose pain management regimen consisted of either opioids as monotherapy or a combination of opioids and skeletal muscle relaxants (SMRs), which can increase the opioid overdose risk and are not indicated for long-term use (except for baclofen for spasticity). Thirty-five percent of patients were taking either opioid monotherapy or opioids and SMRs for chronic pain management at time of consult and 28% were taking opioid monotherapy or opioids and SMRs 6 months postconsult.
Electronic Consults
Table 2 describes the reasons eConsults were requested. The most common reason was to taper the dose to be in compliance with the CDC 2016 guideline recommendation of MEDD < 90 mg, which was later increased to 100 mg by the VA/DoD guideline.
On average, eConsults were completed within a mean of 11.5 days of the PCP request, including nights and weekends. The CPS spent a mean 66.8 minutes to complete each eConsult. Once the eConsult was completed, PCPs took a mean of 9 days to initiate the primary recommendation. This 9-day average does not include 11 eConsults with no accepted recommendations and 11 eConsults for which the PCP implemented the primary recommendation before the CPS completed the consult, most likely due to a phone call or direct contact with the CPS at the time the eConsult was ordered.
A mean 3.5 actionable recommendations were made by the CPS and a mean 1.6 recommendations were implemented within 6 months by the PCP. At least 1 recommendation was accepted/implemented for 89% of patients, with a mean 55% recommendations that were accepted/implemented. Eleven percent of the eConsult final recommendations were not accepted by PCPs and clear documentation of the reasons were not provided.
Adverse Outcomes
In the 6 months postconsult, 11 patients (7 men and 4 women) experienced 32 AEs (Table 3). Eight patients had 15 ED visits, with 3 of the visits resulting in hospitalizations, 8 patients had 9 unscheduled PCP visits, 1 patient reported suicidal ideation and 2 patients made a total of 4 calls to the Veterans Crisis Line. There were also 2 deaths; however, both were due to end-stage disease (cirrhosis and amyotrophic lateral sclerosis) and not believed to be related to eConsult recommendations.
Eight patients had a history of substance use disorders (SUDs) and 8 had a history of a mood disorder or psychosis. One patient had both SUD and a mood/psychosis-related mental health disorder, including a reported suicidal attempt/ideation at an ED visit and a subsequent hospitalization. A similar number of AEs occurred in patients with decreases in MEDD of 0 to 24% compared with those that received more aggressive tapers of 75 to 100% (Table 4).
Primary Care Providers
Nine patients were reconsulted, with only 1 secondary to the PCP not implementing recommendations from the initial consult. No factors were found that correlated with likelihood of a patient being reconsulted.
Surveys on PCP satisfaction with the eConsult service were completed by 29 of the 55 PCPs. PCP feedback was generally positive with nearly 90% of PCPs planning to use the service in the future as well as recommending use to other providers.
PCPs also were given the option to indicate the most important factor for overall satisfaction with eConsult service (time, access, safety, expectations or confidence). Safety was provider’s top choice with time being a close second.
Discussion
Most (89%) PCPs accepted at least 1 recommendation from the completed eConsult, and MEDDs decreased by 60%, likely reducing the patient’s risk of overdose or other AEs from opioids. There also was a slight reduction in patient’s mean pain scores; however, 41% had a decrease and 42% had an increase in pain scores. There was no clear relationship when pain scores were compared with MEDDs, likely giving credence to the idea that pain scores are largely subjective and an unreliable surrogate marker for assessing effectiveness of analgesic regimens.
Eleven patients experienced AEs, including 1 patient for whom the recommendations were not implemented by the PCP. Eight of the 11 had multiple AEs. One interesting finding was that 7 of the 11 patients with an AE tested positive for unexpected substances on routine UDS or were arrested for driving while intoxicated (DWI). However, only 3 of the 7 had an active SUD diagnosis. With 25% of the AEs coming from patients with a history of SUD, it is important that any history of SUD be documented in the EHR. Maintaining this documentation can be especially difficult if patients switch VA medical centers or receive services outside the VA. Thorough and accurate history and chart review should ideally be completed before prescribing opioids.
Guidelines
While the PCPs were following VA/DoD and CDC recommendations for opioid tapering to < 100 or 90 mg MEDD, respectively, there is weak evidence in these guidelines to support specific MEDD cutoffs. The CDC guidelines even state, “a single dosage threshold for safe opioid use could not be identified.”5 One of the largest issues when using MEDD as a cutoff is the lack of agreement on its calculation. In 2014, Nuckols and colleagues al conducted a study to compare the existing guidelines on the use of opioids for chronic pain. While 13 guidelines were considered eligible, most recommendations were supported only by observational data or expert recommendations, and there was no consensus on what constitutes a “morphine equivalent.”8 Currently there is no universally accepted opioid-conversion method, resulting in a substantial problem when calculating a MEDD.9 A survey of 8 online opioid dose conversion tools found a -55% to +242% variation.10 As Fudin and colleagues concluded in response to the large variations found in these various analyses, the studies “unequivocally disqualify the validity of embracing MEDD to assess risk in any meaningful statistical way.”11 Pharmacogenetics, drug tolerance, drug-drug interactions, body surface area, and organ function are patient- specific factors that are not taken into consideration when relying solely on a MEDD calculation. Tapering to lowest functional dose rather than a specific number or cutoff may be a more effective way to treat patients, and providers should use the guidelines as recommendations and not a hardline mandate.
At 6 months, 6 patients were receiving no pain medications from the VA, and 24 of the patients were tapered from their opiate to discontinuation. It is unclear whether patients are no longer taking opioids or switched their care to non-VA providers to receive medications, including opioids, privately. This is difficult to verify, though a prescription drug monitoring program (PDMP) could be used to assess patient adherence. As many of the patients that were tapered due to identification of aberrant behaviors, lack of continuity of care across health care systems may result in future patient harm.
The results of this analysis highlight the importance of checking PDMP databases and routine UDSs when prescribing opioids—there can be serious safety concerns if patients are taking other prescribed or illicit medications. However, care must be taken; there were 2 instances of patients’ chronic opioid prescriptions discontinued by their VA provider after a review of the PDMP showed they had received non-VA opioids. In both cases, the quantity and doses received were small (counts of ≤ 12) and were received more than 6 months prior to the check of the PDMP. While this constitutes a breach of the Informed Consent for long-term opioid use, if there are no other concerning behaviors, it may be more prudent to review the informed consent with the patient and discuss why the behavior is a breach to ensure that patients and PCPs continue to work as a team to manage chronic pain.
Limitations
The study population was one limitation of this project. While data suggest that chronic pain affects women more than men, this study’s population was only 13% female. Thirty percent of the women in this study had an AE compared with only 8% of the men. Additional limitations included use of problem list for comorbidities, as lists may be inaccurate or outdated, and limiting the monitoring of AE to only 6 months. As some tapers were not initiated immediately and some taper schedules can last several months to years; therefor, outcomes may have been higher if patients were followed longer. Many of the patients with AEs had increased ED visits or unscheduled primary care visits as the tapers went on and their pain worsened, but the visits were outside the 6-month time frame for data collection. An additional weakness of this review included assessing a pain score, but not functional status, which may be a better predictor of the effectiveness of a patient’s pain management regimen. This assessment is needed in future studies for more reliable data. Finally, PCP survey results also should be viewed with caution. The current survey had only 29 respondents, and the 2014 survey had only 10 respondents and did not include CBOC providers.
Conclusion
A pain eConsult service managed by CPSs specializing in pain management can assist patients and PCPs with opioid therapy recommendations in a safe and timely manner, reducing risk of overdose secondary to high dose opioid therapy and with limited harm to patients.
In the late 1980s and early 1990s, an emphasis on better pain management led health care professionals (HCPs) to increase prescribing of opioids to better manage patient’s pain. In 1991, 76 million prescriptions were written for opioids in the United States, and by 2011, the number had nearly tripled to 219 million.1 Overdose rates increased as well, nearly tripling from 1999 to 2014.2 Of the 52,404 US deaths from drug overdoses in the in 2015, 63% involved an opioid.2
Opioid Safety Initiative
In response to the growing opioid epidemic, the US Department of Veterans Affairs (VA) created the Opioid Safety Initiative in 2014.3 This comprehensive, multifaceted initiative was designed to improve the care and safety of veterans managed with opioid therapy and promote rational opioid prescribing and monitoring. In 2016 the Centers for Disease Control and Prevention (CDC) issued guidelines for opioid prescriptions, and the following year the VA and the US Department of Defense (DoD) updated the VA/DoD Clinical Practice Guidelines for Opioid Therapy for Chronic Pain (VA/DoD guidelines).4,5 After the release of these guidelines, the use of opioid tapers expanded. However, due to public outcry of forced opioid tapering in 2019, the US Food and Drug Administration updated its opioid labeling requirements to provide clearer guidance on opioid tapers for tolerant patients.6,7
As a result, HCPs began to develop various strategies to balance the safety and efficacy of opioid use in patients with chronic pain. The West Palm Beach VA Medical Center (WPBVAMC) in Florida has a Pain Clinic that includes 2 pain management clinical pharmacy specialists (CPSs) with specialized training in pain management, who are uniquely qualified to assess and evaluate medication therapy in complex pain patient cases. These CPSs were involved in the face-to-face management of patients requiring specialized pain care and participated in a pain pharmacy electronic consult (eConsult) service to document pain management consultative recommendations for patients appropriate for management at the primary care level. This formalized process increased specialty pain care access for veterans whose pain was managed by primary care providers (PCPs).
The pain pharmacy eConsult service was initiated at the WPBVAMC in June 2013 to assist PCPs in the management of outpatients with chronic pain. The service included evaluation of a patient’s electronic health record (EHR) by a CPS and gave PCPs the option to engage a pharmacist who could provide recommendations for opioid dosing conversion, opioid tapering, pain pharmacotherapy, or drug screen interpretation, without the necessity for an additional patient visit.
Subsequent to the release of the 2016 CDC (and later the 2017 VA/DoD) guidelines recommending reducing morphine equivalent daily dose (MEDD) levels, the WPBVAMC had a large increase in pain eConsult requests for opioid tapering and opioid pharmacotherapy. Requests were 3.4-fold higher in March, April, and May than in the following 9 months, with a nearly 4-fold difference in requests for opioid tapers during the same period. However, the impact of the completed eConsults was unclear. Therefore, the primary objective of this study was to assess the effect of CPS services for opioid tapering and opioid pharmacotherapy by quantifying the number of recommendations accepted/implemented by PCPs. The secondary objectives included evaluating harms associated with the recommendations (eg, increase in visits to the emergency department [ED], hospitalizations, suicide attempts, or PCP visits) and provider satisfaction.
Methods
A retrospective chart review was completed to assess data of patients from the WPBVAMC and its associated community-based outpatient clinics (CBOCs). The project was approved by the WPBVAMC Scientific Advisory Committee as part of the facility’s performance improvement efforts.
Included patients had a pain pharmacy eConsult placed between April 1, 2016, and March 31, 2017. EHRs were reviewed, and only eConsults for opioid pharmacotherapy recommendations or opioid tapers were evaluated. eConsults were excluded if the request was discontinued, completed by an HCP other than the pain CPS, or placed for an opioid dose conversion, nonopioid pharmacotherapy, or drug screen interpretation.
Data for analyses were entered into Microsoft Excel 2016 and were securely saved and accessible to relevant researchers. Patient protected health information used during patient care remained confidential.
Demographic data were collected, including age, gender, race, pertinent medical comorbidities (eg, diabetes mellitus, sleep apnea), and mental health comorbidities. Pain scores were collected at baseline and 6 months postconsult. Pain medications used by patients were noted at baseline and 6 months postconsult, including concomitant opioid and benzodiazepine use, MEDD, and other pain medications. The time needed by the pain CPS to complete each eConsult and the total time from eConsult entry to HCP implementation of the initial recommendation were collected. The number of actionable recommendations (eg, changes in drug therapy, urine drug screens [UDSs], and referrals to other services) also was recorded and reviewed 6 months postconsult to determine the number and percentage of recommendations implemented by the HCP. The EHR was examined to determine adverse events (AEs) (eg, any documentation of suicide attempt, calls to the Veterans Crisis Line, or death 6 months postconsult). Collected data also included new eConsults, the reason for opioid tapering (either by HCP or patient), and assessment of economic harms (count of the number of visits to the ED, hospitalizations, or unscheduled PCP visits with uncontrolled pain as the chief reason within 6 months postconsult). Last, PCPs were sent a survey to assess their satisfaction with the pain eConsult service.
Results
Of 517 eConsults received from April 1, 2016 to March 31, 2017, 285 (55.1%) met inclusion criteria (Figure). Using a random number generator, 100 eConsults were further reviewed for outcomes of interest.
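The random selection step can be sketched as follows. The consult identifiers and seed are hypothetical; the point is seeded sampling without replacement, which is what a random number generator workflow like the one described typically amounts to:

```python
import random

# Hypothetical pool of eConsult IDs that met inclusion criteria (n = 285)
eligible_consults = list(range(1, 286))

# Seed for reproducibility, then draw 100 IDs without replacement
random.seed(42)
sample = random.sample(eligible_consults, k=100)

assert len(sample) == 100
assert len(set(sample)) == 100  # sampling without replacement: no duplicates
```

Seeding makes the draw reproducible, which matters if the chart review must be re-audited against the same 100 records.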
In this cohort, the mean age was 61 years, 87% were male, and 80% were White individuals. Most patients (83%) had ≥ 1 mental health comorbidity, and 53% had ≥ 2, with depressive symptoms, tobacco use, and/or posttraumatic stress disorder the most common diagnoses (Table 1). Eighty-seven percent of eConsults were for opioid tapers and the remaining 13% were for opioid pharmacotherapy.
The median pain score at time of consult was 6 on a 10-point scale, with no change at 6 months postconsult. However, 41% of patients overall had a median 3.3-point drop in pain score, 17% had no change in pain score, and 42% had a median 2.6-point increase in pain score.
At time of consult, 24% of patients had an opioid and benzodiazepine prescribed concurrently. At the time of the initial request, the mean MEDD was 177.5 mg (median, 165; range, 0-577.5). At 6 months postconsult, the average MEDD was 71 mg (median, 90; range, 0-450) for a mean 44% MEDD decrease. Eighteen percent of patients had no change in MEDD, and 5% had an increase.
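The mean per-patient percentage decrease (44%) can legitimately differ from the decrease in the group mean MEDD (177.5 mg to 71 mg, about 60%), because each patient's percentage counts equally regardless of starting dose. A hypothetical 3-patient illustration of the distinction:

```python
# Hypothetical MEDDs (mg) at baseline and 6 months for 3 patients
baseline = [300.0, 100.0, 30.0]
followup = [90.0, 60.0, 27.0]

# Mean of per-patient percent decreases: each patient weighs equally
per_patient = [(b - f) / b * 100 for b, f in zip(baseline, followup)]
mean_per_patient = sum(per_patient) / len(per_patient)  # (70 + 40 + 10) / 3 = 40%

# Percent decrease of the group totals: dominated by high-dose patients
group = (sum(baseline) - sum(followup)) / sum(baseline) * 100  # ~58.8%

print(round(mean_per_patient, 1), round(group, 1))
```

High-dose patients who received large tapers pull the aggregate figure up, while the per-patient average reflects the typical individual experience.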
One concern was the number of patients whose pain management regimen consisted of either opioids as monotherapy or a combination of opioids and skeletal muscle relaxants (SMRs), which can increase the opioid overdose risk and are not indicated for long-term use (except for baclofen for spasticity). Thirty-five percent of patients were taking either opioid monotherapy or opioids and SMRs for chronic pain management at time of consult and 28% were taking opioid monotherapy or opioids and SMRs 6 months postconsult.
Electronic Consults
Table 2 describes the reasons eConsults were requested. The most common reason was to taper the dose to be in compliance with the CDC 2016 guideline recommendation of MEDD < 90 mg, which was later increased to 100 mg by the VA/DoD guideline.
eConsults were completed within a mean of 11.5 days of the PCP request, including nights and weekends. The CPS spent a mean of 66.8 minutes to complete each eConsult. Once the eConsult was completed, PCPs took a mean of 9 days to initiate the primary recommendation. This 9-day average does not include 11 eConsults with no accepted recommendations and 11 eConsults for which the PCP implemented the primary recommendation before the CPS completed the consult, most likely due to a phone call or direct contact with the CPS at the time the eConsult was ordered.
A mean of 3.5 actionable recommendations were made by the CPS, and a mean of 1.6 recommendations were implemented within 6 months by the PCP. At least 1 recommendation was accepted/implemented for 89% of patients, with a mean of 55% of recommendations accepted/implemented. Eleven percent of the eConsult final recommendations were not accepted by PCPs, and clear documentation of the reasons was not provided.
Adverse Outcomes
In the 6 months postconsult, 11 patients (7 men and 4 women) experienced 32 AEs (Table 3). Eight patients had 15 ED visits, with 3 of the visits resulting in hospitalizations; 8 patients had 9 unscheduled PCP visits; 1 patient reported suicidal ideation; and 2 patients made a total of 4 calls to the Veterans Crisis Line. There were also 2 deaths; however, both were due to end-stage disease (cirrhosis and amyotrophic lateral sclerosis) and not believed to be related to eConsult recommendations.
Eight patients had a history of substance use disorders (SUDs), and 8 had a history of a mood disorder or psychosis. One patient had both SUD and a mood/psychosis-related mental health disorder, including a reported suicide attempt/ideation at an ED visit and a subsequent hospitalization. A similar number of AEs occurred in patients with decreases in MEDD of 0 to 24% compared with those who received more aggressive tapers of 75 to 100% (Table 4).
Primary Care Providers
Nine patients were reconsulted, with only 1 secondary to the PCP not implementing recommendations from the initial consult. No factors were found that correlated with likelihood of a patient being reconsulted.
Surveys on PCP satisfaction with the eConsult service were completed by 29 of the 55 PCPs. PCP feedback was generally positive with nearly 90% of PCPs planning to use the service in the future as well as recommending use to other providers.
PCPs also were given the option to indicate the most important factor for overall satisfaction with the eConsult service (time, access, safety, expectations, or confidence). Safety was providers’ top choice, with time a close second.
Discussion
Most (89%) PCPs accepted at least 1 recommendation from the completed eConsult, and MEDDs decreased by 60%, likely reducing patients’ risk of overdose or other AEs from opioids. There also was a slight reduction in patients’ mean pain scores; however, 41% had a decrease and 42% had an increase in pain scores. There was no clear relationship when pain scores were compared with MEDDs, likely giving credence to the idea that pain scores are largely subjective and an unreliable surrogate marker for assessing the effectiveness of analgesic regimens.
Eleven patients experienced AEs, including 1 patient for whom the recommendations were not implemented by the PCP. Eight of the 11 had multiple AEs. One interesting finding was that 7 of the 11 patients with an AE tested positive for unexpected substances on routine UDS or were arrested for driving while intoxicated (DWI). However, only 3 of the 7 had an active SUD diagnosis. With 25% of the AEs coming from patients with a history of SUD, it is important that any history of SUD be documented in the EHR. Maintaining this documentation can be especially difficult if patients switch VA medical centers or receive services outside the VA. Thorough and accurate history and chart review should ideally be completed before prescribing opioids.
Guidelines
While the PCPs were following VA/DoD and CDC recommendations for opioid tapering to < 100 or 90 mg MEDD, respectively, there is weak evidence in these guidelines to support specific MEDD cutoffs. The CDC guidelines even state, “a single dosage threshold for safe opioid use could not be identified.”5 One of the largest issues when using MEDD as a cutoff is the lack of agreement on its calculation. In 2014, Nuckols and colleagues conducted a study to compare the existing guidelines on the use of opioids for chronic pain. While 13 guidelines were considered eligible, most recommendations were supported only by observational data or expert recommendations, and there was no consensus on what constitutes a “morphine equivalent.”8 Currently there is no universally accepted opioid-conversion method, resulting in a substantial problem when calculating a MEDD.9 A survey of 8 online opioid dose conversion tools found a -55% to +242% variation.10 As Fudin and colleagues concluded in response to the large variations found in these various analyses, the studies “unequivocally disqualify the validity of embracing MEDD to assess risk in any meaningful statistical way.”11 Pharmacogenetics, drug tolerance, drug-drug interactions, body surface area, and organ function are patient-specific factors that are not taken into consideration when relying solely on a MEDD calculation. Tapering to the lowest functional dose rather than a specific number or cutoff may be a more effective way to treat patients, and providers should use the guidelines as recommendations and not a hardline mandate.
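A sketch of why conversion-factor choice drives the MEDD result. The conversion factors below are illustrative assumptions only (published tables disagree, which is precisely the problem described above), not clinical guidance:

```python
# Two hypothetical oral morphine-equivalence tables (mg opioid -> mg morphine).
# Published tables genuinely differ in these factors; values here are illustrative.
FACTORS_A = {"oxycodone": 1.5, "hydrocodone": 1.0, "hydromorphone": 4.0}
FACTORS_B = {"oxycodone": 2.0, "hydrocodone": 1.0, "hydromorphone": 5.0}

def medd(daily_doses_mg, factors):
    """MEDD = sum over opioids of (daily dose * conversion factor)."""
    return sum(dose * factors[drug] for drug, dose in daily_doses_mg.items())

regimen = {"oxycodone": 40, "hydromorphone": 8}  # hypothetical daily regimen
print(medd(regimen, FACTORS_A))  # 40*1.5 + 8*4.0 = 92.0 mg  -> "under 100"
print(medd(regimen, FACTORS_B))  # 40*2.0 + 8*5.0 = 120.0 mg -> "over 100"
```

The same regimen lands on opposite sides of a 100 mg threshold depending on which table is used, which is why a fixed MEDD cutoff is hard to enforce consistently.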
At 6 months, 6 patients were receiving no pain medications from the VA, and 24 of the patients were tapered from their opioid to discontinuation. It is unclear whether these patients are no longer taking opioids or switched their care to non-VA providers to receive medications, including opioids, privately. This is difficult to verify, though a prescription drug monitoring program (PDMP) could be used to assess patient adherence. Because many of the patients were tapered after identification of aberrant behaviors, lack of continuity of care across health care systems may result in future patient harm.
The results of this analysis highlight the importance of checking PDMP databases and routine UDSs when prescribing opioids—there can be serious safety concerns if patients are taking other prescribed or illicit medications. However, care must be taken; there were 2 instances of patients’ chronic opioid prescriptions being discontinued by their VA provider after a review of the PDMP showed they had received non-VA opioids. In both cases, the quantity and doses received were small (counts of ≤ 12) and were received more than 6 months prior to the check of the PDMP. While this constitutes a breach of the informed consent for long-term opioid use, if there are no other concerning behaviors, it may be more prudent to review the informed consent with the patient and discuss why the behavior is a breach to ensure that patients and PCPs continue to work as a team to manage chronic pain.
Limitations
The study population was one limitation of this project. While data suggest that chronic pain affects women more than men, this study’s population was only 13% female. Thirty percent of the women in this study had an AE compared with only 8% of the men. Additional limitations included use of the problem list for comorbidities, as lists may be inaccurate or outdated, and limiting the monitoring of AEs to only 6 months. Some tapers were not initiated immediately, and some taper schedules can last several months to years; therefore, AE counts may have been higher if patients were followed longer. Many of the patients with AEs had increased ED visits or unscheduled primary care visits as the tapers went on and their pain worsened, but the visits were outside the 6-month time frame for data collection. An additional weakness of this review was that it assessed pain scores but not functional status, which may be a better predictor of the effectiveness of a patient’s pain management regimen; this assessment is needed in future studies for more reliable data. Finally, PCP survey results also should be viewed with caution. The current survey had only 29 respondents, and the 2014 survey had only 10 respondents and did not include CBOC providers.
Conclusion
A pain eConsult service managed by CPSs specializing in pain management can assist patients and PCPs with opioid therapy recommendations in a safe and timely manner, reducing the risk of overdose secondary to high-dose opioid therapy with limited harm to patients.
1. National Institute on Drug Abuse. Increased drug availability is associated with increased use and overdose. Published June 9, 2020. Accessed February 19, 2021. https://www.drugabuse.gov/publications/research-reports/prescription-opioids-heroin/increased-drug-availability-associated-increased-use-overdose
2. Rudd RA, Seth P, David F, Scholl L. Increases in drug and opioid-involved overdose deaths - United States, 2010-2015. MMWR Morb Mortal Wkly Rep. 2016;65(50-51):1445-1452. Published 2016 Dec 30.doi:10.15585/mmwr.mm655051e1
3. US Department of Veterans Affairs, Office of Inspector General. Healthcare inspection – VA patterns of dispensing take-home opioids and monitoring patients on opioid therapy. Report 14-00895-163. Published May 14, 2014. Accessed February 2, 2021. https://www.va.gov/oig/pubs/VAOIG-14-00895-163.pdf
4. US Department of Veterans Affairs, US Department of Defense, Opioid Therapy for Chronic Pain Work Group. VA/DoD clinical practice guidelines for opioid therapy for chronic pain. Version 3.0. Published December 2017. Accessed February 2, 2021. https://www.va.gov/HOMELESS/nchav/resources/docs/mental-health/substance-abuse/VA_DoD-CLINICAL-PRACTICE-GUIDELINE-FOR-OPIOID-THERAPY-FOR-CHRONIC-PAIN-508.pdf
5. Dowell D, Haegerich TM, Chou R. CDC Guideline for Prescribing Opioids for Chronic Pain - United States, 2016 [published correction appears in MMWR Recomm Rep. 2016;65(11):295]. MMWR Recomm Rep. 2016;65(1):1-49. Published 2016 Mar 18. doi:10.15585/mmwr.rr6501e1.
6. US Food and Drug Administration. (2019). FDA identifies harm reported from sudden discontinuation of opioid pain medicines and requires label changes to guide prescribers on gradual, individualized tapering. Updated April 17, 2019. Accessed February 2, 2021. https://www.fda.gov/drugs/fda-drug-safety-podcasts/fda-identifies-harm-reported-sudden-discontinuation-opioid-pain-medicines-and-requires-label-changes
7. Dowell D, Haegerich T, Chou R. No Shortcuts to Safer Opioid Prescribing. N Engl J Med. 2019;380(24):2285-2287. doi:10.1056/NEJMp1904190
8. Nuckols TK, Anderson L, Popescu I, et al. Opioid prescribing: a systematic review and critical appraisal of guidelines for chronic pain. Ann Intern Med. 2014;160(1):38-47. doi:10.7326/0003-4819-160-1-201401070-00732
9. Rennick A, Atkinson T, Cimino NM, Strassels SA, McPherson ML, Fudin J. Variability in Opioid Equivalence Calculations. Pain Med. 2016;17(5):892-898. doi:10.1111/pme.12920
10. Shaw K, Fudin J. Evaluation and comparison of online equianalgesic opioid dose conversion calculators. Pract Pain Manag. 2013;13(7):61-66.
11. Fudin J, Pratt Cleary J, Schatman ME. The MEDD myth: the impact of pseudoscience on pain research and prescribing-guideline development. J Pain Res. 2016;9:153-156. Published 2016 Mar 23. doi:10.2147/JPR.S107794
Weight Gain in Veterans Taking Duloxetine, Pregabalin, or Both for the Treatment of Neuropathy
Neuropathy is the result of damage to the nervous system. This dysfunction generally occurs in peripheral nerves, which are the circuits that transmit signals to the brain and spinal cord. The peripheral nervous system is responsible for controlling motor and autonomic nerves and conducting sensory information. Injury to the nervous system can lead to changes in nerve fiber sensitivity and malfunctioning of nerve stimuli pathways. Neuropathy may be a sequela of a wide variety of diseases, including diabetes mellitus (DM), autoimmune disorders, infections, and cancer. Neuropathy also can be caused by medications, trauma, or exposure to toxins, or may be classified as idiopathic.1-5
Peripheral neuropathy is a common condition with an estimated incidence of > 3 million cases in the United States per year.4 The burden of neuropathy may be greater among veterans, due to a higher prevalence of type 2 DM (T2DM) and an aging population. Manifestations of neuropathy include weakness, numbness, burning or tingling sensations, and lingering pain.3,5 This can lead to limited mobility and decreased quality of life. Neuropathy can be debilitating, but several medications can be used to alleviate symptoms—including duloxetine and pregabalin. The American Diabetes Association recommends either agent as initial treatment for neuropathic pain in patients with DM.2 As with all medication use, the benefits and risks of treatment must be assessed prior to initiation of therapy.
The Centers for Disease Control and Prevention estimates > 70% of adults in the United States are overweight or obese.6 Excessive weight gain increases the risk of developing comorbidities such as coronary artery disease, cerebrovascular accident, T2DM, and cancer, all of which can lead to premature death. It is important to avoid excessive weight gain whenever possible, especially in patients already at high risk for developing these diseases.
Weight gain in patients taking duloxetine, pregabalin, or both is not well studied. Duloxetine has the potential to cause weight gain or weight loss, with reports of > 1% incidence for either effect.7 The clinical significance of weight changes caused by duloxetine is uncertain. Pregabalin is more likely to cause weight gain, with a reported incidence between 2% and 14%.8 Weight gain may be associated with dose and duration; 1 study demonstrated an average weight gain of about 11 lb after 2 years of pregabalin treatment.8 The medical literature lacks information regarding weight gain associated with combination therapy of duloxetine and pregabalin. The objective of this study was to investigate the association of weight gain in veterans taking duloxetine, pregabalin, or both for the treatment of neuropathy.
Methods
A retrospective, single-center, chart review was conducted at the Sioux Falls Veterans Affairs Health Care System (SFVAHCS). Data were collected through manual chart review of US Department of Veterans Affairs (VA) electronic health records (EHRs). Patients included were veterans aged 18 to 89 years who were initiated on duloxetine and/or pregabalin between October 2015 and September 2018.
The primary end point of this study was the change in body weight, expressed in pounds, after 12 to 18 months of treatment. If multiple weights were obtained during the 12- to 18-month period, the weight recorded closest to 12 months was used. The secondary end points included the percent change in body weight and dose effect, which evaluated change in weight at doses of duloxetine > 60 mg/d, and pregabalin at doses > 300 mg/d. Duration of effect was evaluated as a secondary end point; contrary to the primary end point, the weight furthest from 12 months was recorded. The change in hemoglobin A1c (HbA1c) in patients with prediabetes and DM also was investigated as a secondary end point. Last, involvement in the Managing Overweight Veterans Everywhere (MOVE!) weight management program at SFVAHCS and its effect on weight gain was reviewed.
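The selection rule for the primary end point (restrict to the 12- to 18-month window, then take the weight recorded closest to 12 months) can be sketched as follows; the patient records are hypothetical:

```python
# Hypothetical (months_since_initiation, weight_lb) records for one patient
weights = [(10.0, 202.0), (12.5, 204.0), (14.0, 206.5), (17.8, 209.0)]

# Keep only records in the 12- to 18-month window...
in_window = [(t, w) for t, w in weights if 12 <= t <= 18]

# ...then use the record closest to 12 months for the primary end point
closest = min(in_window, key=lambda rec: abs(rec[0] - 12))
print(closest)  # (12.5, 204.0)

# For the duration-of-effect secondary end point, the rule flips: furthest from 12 months
furthest = max(in_window, key=lambda rec: abs(rec[0] - 12))
print(furthest)  # (17.8, 209.0)
```

Making both rules explicit matters when interpreting the results: the primary and duration-of-effect end points can select different weights from the same chart.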
Baseline characteristics were collected to determine the variability between each study group. Data collected during the study included age, sex, race, weight, BMI, HbA1c, estimated glomerular filtration rate (eGFR), DM diagnosis, insulin therapy prescription, duration of use, and MOVE! program participation.
Statistical Analysis
The primary and secondary end points were analyzed using an analysis of variance statistical test. Results were considered statistically significant at P < .05.
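A one-way analysis of variance across the 3 treatment groups can be computed from first principles; the weight-change values below are hypothetical, and the sketch uses only the standard library rather than a statistics package:

```python
# One-way ANOVA F statistic, stdlib only. Weight changes (lb) are hypothetical.
groups = {
    "duloxetine":  [-2.1, 0.5, -1.0, -0.4, 1.2],
    "pregabalin":  [3.0, 2.2, 4.1, 1.8, 3.4],
    "combination": [5.0, 6.2, 4.8, 5.9, 6.5],
}

all_vals = [v for g in groups.values() for v in g]
grand_mean = sum(all_vals) / len(all_vals)

# Between-group sum of squares: group sizes times squared deviation of group means
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups.values())
# Within-group sum of squares: squared deviations from each group's own mean
ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups.values() for v in g)

df_between = len(groups) - 1             # k - 1 = 2
df_within = len(all_vals) - len(groups)  # N - k = 12

f_stat = (ss_between / df_between) / (ss_within / df_within)
# A large F (here well above the F(2, 12) critical value of ~3.89 at alpha = .05)
# corresponds to P < .05, the study's significance threshold
print(round(f_stat, 1))
```

The study's reported P values would come from comparing such an F statistic against the F distribution with the appropriate degrees of freedom.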
Results
A total of 174 participants were included in this study, with 77 in each monotherapy group, and 22 in the combination therapy group. More than 300 patients were excluded from the study due to prespecified inclusion and exclusion criteria. Baseline characteristics were similar among the 3 groups, with no statistically significant differences identified (Table 1).
Primary End Point
The change in body weight after 12 to 18 months of treatment was –0.8 lb in the duloxetine group, +2.9 lb in the pregabalin group, and +5.5 lb in the pregabalin plus duloxetine group (P = .12) (Figure).
Secondary End Points
The percent change in body weight after 12 to 18 months of treatment was −0.3% in the duloxetine group, +1.5% in the pregabalin group, and +2.0% in the duloxetine plus pregabalin group (P = .18). The change in body weight beyond 12 months of treatment was −0.9 lb in the duloxetine group, +3.6 lb in the pregabalin group, and +8.5 lb in the duloxetine plus pregabalin group (P = .05). The change in HbA1c in patients with DM and pre-DM was −0.1% in the duloxetine group, +0.3% in the pregabalin group, and −0.3% in the duloxetine plus pregabalin group (P = .14). The change in body weight in patients who received increased doses of the study agents was −2.8 lb in the duloxetine group and +6.5 lb in the pregabalin group (P = .05). Among veterans who participated in MOVE!, change in body weight after 12 to 18 months of treatment was +1.5 lb in the duloxetine group, +4.9 lb in the pregabalin group, and +3.4 lb in the pregabalin plus duloxetine group (P = .91)(Table 2).
Discussion
The purpose of this retrospective chart review was to evaluate the association of weight gain in veterans taking duloxetine and/or pregabalin for the treatment of neuropathy. Although the primary end point, weight gain after 12 to 18 months of therapy, was not statistically significant, we found notable trends and associations worthy of discussion.
The secondary end point of the difference in weight gain in veterans taking duloxetine, pregabalin, or both for a treatment duration > 12 months was statistically significant. For this secondary end point, the weight recorded was when the study agent(s) were discontinued or the most recent weight obtained if participants still had an active prescription; the average duration of treatment in the 3 study groups was about 24 months. These weights differed from the primary end point, in which weight closest to 12 months of therapy was recorded.
The other secondary end point that was statistically significant was the difference in weight change in patients who were on higher doses of duloxetine or pregabalin. This specifically examined participants who were on doses of duloxetine > 60 mg/d and pregabalin > 300 mg/d. Duloxetine was associated with weight loss, whereas pregabalin was associated with weight gain, with a difference of about 10 lb between the groups. The significance of this secondary end point suggests that increased doses of duloxetine and pregabalin are more strongly associated with changes in weight than are standard doses.
The secondary end points of percent change in body weight, change in HbA1c in patients with DM and prediabetes, and weight gain in patients who participated in the MOVE! weight management program were not statistically significant among the 3 study groups. Given the relatively small sample sizes, significant differences in the primary and secondary end points may have been observed with a larger patient population.
Study investigators made additional observations beyond the primary and secondary end points. Most notably, > 300 patients were excluded from this study because they did not continue treatment beyond 12 months. The investigators found this number staggering, as it may imply that veterans were not satisfied with treatment agent(s) within 1 year of initiation, which could be due to lack of efficacy or intolerable adverse effects.
The mechanism of why combination therapy of duloxetine and pregabalin may be more associated with weight gain compared with either agent alone is unknown. Since this study found duloxetine to be more associated with weight loss, the mechanism does not seem to be an additive effect. The alternative hypothesis proposed prior to the completion of this study stemmed from an observation seen by health care providers at SFVAHCS.
Limitations
The retrospective nature of the study does not provide proof of causation but does demonstrate association. There was no control group, and the study design did not allow for randomization of participants. Additionally, since the study was completed at a single center, there was potential for selection bias. Future studies could benefit from a multicenter design, which may provide a higher level of external validity. Several confounding factors have the potential to influence changes in weight, and not all of them can feasibly be accounted for. Since participants were ambulatory veterans, medication adherence could not be confirmed.
Conclusions
There was no difference in weight gain in veterans who took duloxetine, pregabalin, or both for treatment of neuropathy after 12 to 18 months of therapy. However, there was a difference in weight gain between the 3 groups when therapy lasted > 12 months. Combination therapy of pregabalin and duloxetine was associated with the most weight gain, followed by pregabalin alone. Duloxetine monotherapy had minimal impact on weight.
In veterans who took increased doses of duloxetine or pregabalin, there was a statistically significant difference in weight between the monotherapy groups, with pregabalin associated with weight gain and duloxetine associated with weight loss.
For patients for whom weight gain may be a concern, it would be reasonable to prefer duloxetine rather than pregabalin for initial treatment of neuropathy. Pregabalin should be used at the lowest effective dose to minimize the risk of weight gain. Combination therapy of duloxetine and pregabalin for the treatment of neuropathy seems to be associated with more weight gain than either therapy alone, and the association with weight change strengthens as treatment extends beyond 12 months.
1. Onakpoya IJ, Thomas ET, Lee JJ, Goldacre B, Heneghan CJ. Benefits and harms of pregabalin in the management of neuropathic pain: a rapid review and meta-analysis of randomised clinical trials. BMJ Open. 2019;9(1):e023600. Published 2019 Jan 21. doi:10.1136/bmjopen-2018-023600
2. American Diabetes Association. 11. Microvascular Complications and Foot Care: Standards of Medical Care in Diabetes-2019. Diabetes Care. 2019;42(suppl 1):S124-S138. doi:10.2337/dc19-S011
3. Baumann TJ, Herndon CM, Strickland JM. Pain Management. In: DiPiro JT, Talbert RL, Yee GC, Matzke GR, Wells BG, Posey LM, eds. Pharmacotherapy: A Pathophysiologic Approach. 9th ed. New York, NY: McGraw-Hill; 2014:925.
4. National Institute of Neurological Disorders and Stroke. Peripheral neuropathy fact sheet. Updated March 16, 2020. Accessed March 10, 2021. https://www.ninds.nih.gov/Disorders/Patient-Caregiver-Education/Fact-Sheets/Peripheral-Neuropathy-Fact-Sheet
5. Feldman EL. Patient education: diabetic neuropathy (beyond the basics). Updated January 20, 2021. Accessed April 21, 2021. https://www.uptodate.com/contents/diabetic-neuropathy-beyond-the-basics
6. Centers for Disease Control and Prevention. Overweight and obesity. Updated October 29, 2020. Accessed March 10, 2021. https://www.cdc.gov/obesity/index.html
7. Cymbalta (duloxetine) [prescribing information]. Eli Lilly and Company; April 2020.
8. Lyrica (pregabalin) [prescribing information]. Parke-Davis, Division of Pfizer Inc; June 2020.
Neuropathy is the result of damage to the nervous system. This dysfunction generally occurs in peripheral nerves, the circuits that transmit signals to and from the brain and spinal cord. The peripheral nervous system controls motor and autonomic nerves and conducts sensory information. Injury to the nervous system can alter nerve fiber sensitivity and disrupt nerve signaling pathways. Neuropathy may be a sequela of a wide variety of diseases, including diabetes mellitus (DM), autoimmune disorders, infections, and cancer. It also can be caused by medications, trauma, or exposure to toxins, or it may be classified as idiopathic.1-5
Peripheral neuropathy is a common condition with an estimated incidence of > 3 million cases in the United States per year.4 The burden of neuropathy may be greater among veterans, due to a higher prevalence of type 2 DM (T2DM) and an aging population. Manifestations of neuropathy include weakness, numbness, burning or tingling sensations, and lingering pain.3,5 This can lead to limited mobility and decreased quality of life. Neuropathy can be debilitating, but several medications can be used to alleviate symptoms—including duloxetine and pregabalin. The American Diabetes Association recommends either agent as initial treatment for neuropathic pain in patients with DM.2 As with all medication use, the benefits and risks of treatment must be assessed prior to initiation of therapy.
The Centers for Disease Control and Prevention estimates > 70% of adults in the United States are overweight or obese.6 Excessive weight gain causes a higher risk of developing certain comorbidities, such as coronary artery disease, cerebrovascular accident, T2DM, and cancer, and all can lead to premature death. It is important to avoid excessive weight gain whenever possible, especially in patients already at a high risk for developing these diseases.
The association between weight gain and treatment with duloxetine, pregabalin, or both is not well studied. Duloxetine has the potential to cause weight gain or weight loss, with reports of > 1% incidence for either effect.7 The clinical significance of weight changes caused by duloxetine is uncertain. Pregabalin is more likely to cause weight gain, with a reported incidence between 2% and 14%.8 Weight gain may be associated with dose and duration; 1 study demonstrated an average weight gain of about 11 lb after 2 years of pregabalin treatment.8 The medical literature lacks information regarding weight gain associated with combination therapy of duloxetine and pregabalin. The objective of this study was to investigate the association of weight gain in veterans taking duloxetine, pregabalin, or both for the treatment of neuropathy.
Methods
A retrospective, single-center, chart review was conducted at the Sioux Falls Veterans Affairs Health Care System (SFVAHCS). Data were collected through manual chart review of US Department of Veterans Affairs (VA) electronic health records (EHRs). Patients included were veterans aged 18 to 89 years who were initiated on duloxetine and/or pregabalin between October 2015 and September 2018.
The primary end point of this study was the change in body weight, expressed in pounds, after 12 to 18 months of treatment. If multiple weights were obtained during the 12- to 18-month period, the weight recorded closest to 12 months was used. The secondary end points included the percent change in body weight and dose effect, which evaluated change in weight at doses of duloxetine > 60 mg/d, and pregabalin at doses > 300 mg/d. Duration of effect was evaluated as a secondary end point; contrary to the primary end point, the weight furthest from 12 months was recorded. The change in hemoglobin A1c (HbA1c) in patients with prediabetes and DM also was investigated as a secondary end point. Last, involvement in the Managing Overweight Veterans Everywhere (MOVE!) weight management program at SFVAHCS and its effect on weight gain was reviewed.
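The selection rule for the primary end point — of the weights recorded 12 to 18 months after initiation, use the one closest to the 12-month mark — can be sketched as follows. This is an illustrative reconstruction, not the authors' actual chart-review code; the function name and data layout are assumptions.

```python
from datetime import date

def weight_at_follow_up(start: date, weights: list[tuple[date, float]],
                        earliest_days: int = 365, latest_days: int = 548):
    """Return the weight (lb) recorded closest to 12 months after `start`,
    considering only weights taken 12 to 18 months after initiation.
    `weights` is a list of (measurement_date, pounds) tuples."""
    in_window = [(d, w) for d, w in weights
                 if earliest_days <= (d - start).days <= latest_days]
    if not in_window:
        return None  # no weight recorded in the 12- to 18-month window
    # pick the measurement whose date is nearest to the 12-month mark
    return min(in_window, key=lambda dw: abs((dw[0] - start).days - 365))[1]
```

For the duration-of-effect secondary end point, the same window logic applies but the measurement furthest from 12 months would be selected instead (`max` in place of `min`).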
Baseline characteristics were collected to determine the variability between each study group. Data collected during the study included age, sex, race, weight, BMI, HbA1c, eGFR, DM diagnosis, insulin therapy prescription, duration of use, and MOVE! program participation.
Statistical Analysis
The primary and secondary end points were analyzed using an analysis of variance statistical test. Results were considered statistically significant at P < .05.
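One-way analysis of variance compares between-group variation with within-group variation across the 3 treatment groups. As a rough illustration of the test statistic itself (a minimal pure-Python sketch, not the statistical software the investigators used):

```python
def one_way_anova_f(*groups):
    """Compute the one-way ANOVA F statistic for two or more groups.
    A larger F indicates more between-group variation relative to
    within-group variation; F is then compared against the F distribution
    with (k - 1, n - k) degrees of freedom to obtain a P value."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # between-group sum of squares (df = k - 1)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # within-group sum of squares (df = n - k)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

In practice a library routine such as `scipy.stats.f_oneway` would be used to obtain both F and the P value directly.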
Results
A total of 174 participants were included in this study, with 77 in each monotherapy group, and 22 in the combination therapy group. More than 300 patients were excluded from the study due to prespecified inclusion and exclusion criteria. Baseline characteristics were similar among the 3 groups, with no statistically significant differences identified (Table 1).
Primary End Point
The change in body weight after 12 to 18 months of treatment was –0.8 lb in the duloxetine group, +2.9 lb in the pregabalin group, and +5.5 lb in the pregabalin plus duloxetine group (P = .12) (Figure).
Secondary End Points
The percent change in body weight after 12 to 18 months of treatment was −0.3% in the duloxetine group, +1.5% in the pregabalin group, and +2.0% in the duloxetine plus pregabalin group (P = .18). The change in body weight beyond 12 months of treatment was −0.9 lb in the duloxetine group, +3.6 lb in the pregabalin group, and +8.5 lb in the duloxetine plus pregabalin group (P = .05). The change in HbA1c in patients with DM and pre-DM was −0.1% in the duloxetine group, +0.3% in the pregabalin group, and −0.3% in the duloxetine plus pregabalin group (P = .14). The change in body weight in patients who received increased doses of the study agents was −2.8 lb in the duloxetine group and +6.5 lb in the pregabalin group (P = .05). Among veterans who participated in MOVE!, change in body weight after 12 to 18 months of treatment was +1.5 lb in the duloxetine group, +4.9 lb in the pregabalin group, and +3.4 lb in the pregabalin plus duloxetine group (P = .91) (Table 2).
Discussion
The purpose of this retrospective chart review was to evaluate the association of weight gain in veterans taking duloxetine and/or pregabalin for the treatment of neuropathy. Although the primary end point, weight gain after 12 to 18 months of therapy, was not statistically significant, we found notable trends and associations worthy of discussion.
The secondary end point of the difference in weight gain in veterans taking duloxetine, pregabalin, or both for a treatment duration > 12 months was statistically significant. For this secondary end point, the weight recorded was when the study agent(s) were discontinued or the most recent weight obtained if participants still had an active prescription; the average duration of treatment in the 3 study groups was about 24 months. These weights differed from the primary end point, in which weight closest to 12 months of therapy was recorded.
The other statistically significant secondary end point was the difference in weight change among patients who were on higher doses of duloxetine or pregabalin, specifically doses of duloxetine > 60 mg/d and pregabalin > 300 mg/d. Duloxetine was associated with weight loss, whereas pregabalin was associated with weight gain, with a difference of about 10 lb between the groups. This finding suggests that higher doses of duloxetine and pregabalin are more strongly associated with weight change than standard doses.
The secondary end points of percent change in body weight, change in HbA1c in patients with DM and prediabetes, and weight change in patients who participated in the MOVE! weight management program were not statistically significant among the 3 study groups. Given the relatively small sample sizes, significant differences in the primary and secondary end points might have been observed in a larger patient population.
Study investigators made additional observations beyond the primary and secondary end points. Most notably, > 300 patients were excluded from this study because they did not continue treatment beyond 12 months. The investigators found this number staggering, as it may imply that veterans were not satisfied with treatment agent(s) within 1 year of initiation, which could be due to lack of efficacy or intolerable adverse effects.
The mechanism by which combination therapy with duloxetine and pregabalin may be more associated with weight gain than either agent alone is unknown. Since this study found duloxetine to be associated with weight loss, the mechanism does not seem to be an additive effect. The alternative hypothesis proposed prior to the completion of this study stemmed from an observation made by health care providers at SFVAHCS.
Limitations
The retrospective nature of the study does not provide proof of causation but does demonstrate association. There was no control group, and the study design did not allow for randomization of participants. Additionally, since the study was completed at a single center, there was potential for selection bias. Future studies could benefit from pursuing a multicenter study design, which may provide a higher level of external validity. There are several confounding factors that have the potential to influence changes in weight, all of which cannot feasibly be accounted for. Since participants were ambulatory veterans, medication adherence could not be confirmed.
Conclusions
There was no difference in weight gain in veterans who took duloxetine, pregabalin, or both for treatment of neuropathy after 12 to 18 months of therapy. However, there was a difference in weight gain between the 3 groups when therapy lasted > 12 months. Combination therapy with pregabalin and duloxetine was associated with the greatest weight gain, followed by pregabalin alone. Duloxetine monotherapy had minimal impact on weight.
In veterans who took increased doses of duloxetine or pregabalin, there was a statistically significant difference in weight between the monotherapy groups, with pregabalin associated with weight gain and duloxetine associated with weight loss.
For patients in whom weight gain may be a concern, it would be reasonable to prefer duloxetine over pregabalin for initial treatment of neuropathy, and to use pregabalin at the lowest effective dose to minimize the risk of weight gain. Combination therapy with duloxetine and pregabalin for the treatment of neuropathy appears to be associated with greater weight gain than either therapy alone, and the association with weight change strengthens as treatment extends beyond 12 months.
1. Onakpoya IJ, Thomas ET, Lee JJ, Goldacre B, Heneghan CJ. Benefits and harms of pregabalin in the management of neuropathic pain: a rapid review and meta-analysis of randomised clinical trials. BMJ Open. 2019;9(1):e023600. Published 2019 Jan 21. doi:10.1136/bmjopen-2018-023600
2. American Diabetes Association. 11. Microvascular Complications and Foot Care: Standards of Medical Care in Diabetes-2019. Diabetes Care. 2019;42(suppl 1):S124-S138. doi:10.2337/dc19-S011
3. Baumann TJ, Herndon CM, Strickland JM. Pain Management. In: DiPiro JT, Talbert RL, Yee GC, Matzke GR, Wells BG, Posey LM, eds. Pharmacotherapy: A Pathophysiologic Approach. 9th ed. New York, NY: McGraw-Hill; 2014:925.
4. National Institute of Neurological Disorders and Stroke. Peripheral neuropathy fact sheet. Updated March 16, 2020. Accessed March 10, 2021. https://www.ninds.nih.gov/Disorders/Patient-Caregiver-Education/Fact-Sheets/Peripheral-Neuropathy-Fact-Sheet
5. Feldman EL. Patient education: diabetic neuropathy (beyond the basics). Updated January 20, 2021. Accessed April 21, 2021. https://www.uptodate.com/contents/diabetic-neuropathy-beyond-the-basics
6. Centers for Disease Control and Prevention. Overweight and obesity. Updated October 29, 2020. Accessed March 10, 2021. https://www.cdc.gov/obesity/index.html
7. Cymbalta (duloxetine) [prescribing information]. Eli Lilly and Company; April 2020.
8. Lyrica (pregabalin) [prescribing information]. Parke-Davis, Division of Pfizer Inc; June 2020.
Standardization of the Discharge Process for Inpatient Hematology and Oncology Using Plan-Do-Study-Act Methodology Improves Follow-Up and Patient Hand-Off
Hematology and oncology patients are a complex patient population that requires timely follow-up to prevent clinical decompensation and delays in treatment. Previous reports have demonstrated that outpatient follow-up within 14 days is associated with decreased 30-day readmissions. The magnitude of this effect is greater for higher-risk patients.1 Therefore, patients being discharged from the hematology and oncology inpatient service should be seen by a hematology and oncology provider within 14 days of discharge. Patients who do not require close oncologic follow-up should be seen by a primary care provider (PCP) within this timeframe.
Background
The Institute of Medicine (IOM) identified the need to focus on quality improvement and patient safety with a 1999 report, To Err Is Human.2 Tremendous strides have been made in the areas of quality improvement and patient safety over the past 2 decades. In a 2013 report, the IOM further identified hematology and oncology care as an area of need due to a combination of growing demand, complexity of cancer and cancer treatment, shrinking workforce, and rising costs. The report concluded that cancer care is not as patient-centered, accessible, coordinated, or evidence based as it could be, with detrimental impacts on patients.3 Patients with cancer have been identified as a high-risk population for hospital readmissions.4,5 Lack of timely follow-up and failed hand-offs have been identified as factors contributing to poor outcomes at time of discharge.6-10
Upon internal review of baseline performance data, we identified areas of the discharge process needing improvement. These included time to hematology and oncology follow-up appointment, the percentage of patients with PCP appointments scheduled at time of discharge, and electronic alerts to the outpatient hematologist/oncologist regarding discharge summaries. Patients discharged from the inpatient service were seen by their outpatient hematology and oncology provider a mean of 17 days later, and the time to the follow-up appointment varied substantially, with some patients seen several weeks to months after discharge. Furthermore, only 68% of patients had a primary care appointment scheduled at the time of discharge. These data, along with a review of the medical literature, supported our initiative to improve the transition from inpatient to outpatient care for hematology and oncology patients.
Plan-Do-Study-Act (PDSA) quality improvement methodology was used to create and implement several interventions to standardize the discharge process for this patient population, with the primary goal of decreasing the mean time to hematology and oncology follow-up from 17 days to fewer than 14 days. Patients who do not require close oncologic follow-up should be seen by a PCP within this timeframe; otherwise, PCP follow-up should be scheduled within 6 months. Secondary aims included (1) an increase in scheduled PCP visits at time of discharge from 68% to > 90%; and (2) an increase in communication of the discharge summary via electronic alerting of the outpatient hematology and oncology physician from 20% to > 90%. Herein, we report our experience and results of this quality improvement initiative.
Methods
The Institutional Review Board at Edward Hines Veterans Affairs Hospital in Hines, Illinois, reviewed this single-center study and deemed it exempt from oversight. Using PDSA quality improvement methodology, a multidisciplinary team of hematology and oncology staff developed and implemented a standardized discharge process. The team included a robust representation of inpatient and outpatient staff caring for the hematology and oncology patient population, including attending physicians, fellows, residents, advanced practice nurses, registered nurses, clinical pharmacists, patient care coordinators, clinic schedulers, clinical applications coordinators, quality support staff, and a systems redesign coach. Hospital leadership, including the chief of staff, chief of medicine, and chief of nursing, participated as the management guidance team. Several interviews and group meetings were conducted, and the multidisciplinary team collaboratively developed and implemented the interventions and monitored the results.
Outcome measures were identified, including time to hematology and oncology clinic visit, primary care follow-up scheduling, and communication of discharge to the outpatient hematology and oncology physician. Baseline data were collected and reviewed. The multidisciplinary team developed a process flow map to understand the steps and resources involved with the transition from inpatient to outpatient care. Gap analysis and root cause analysis were performed. A solutions approach was applied to develop interventions. Table 1 shows a summary of the identified problems, symptoms, associated causes, the interventions aimed to address the problems, and expected outcomes. Rotating resident physicians were trained through online and in-person education. The multidisciplinary team met intermittently to monitor outcomes, provide feedback, further refine interventions, and develop additional interventions.
PDSA Cycle 1
A standardized discharge process was developed in the form of guidelines and expectations. These included an explanation of the unique features of the hematology and oncology service as well as expectations for medication reconciliation (with emphasis on antiemetics, antimicrobial prophylaxis, and bowel regimen when appropriate), outpatient hematology and oncology follow-up within 14 days, primary care follow-up, communication with the outpatient hematology and oncology physician, written discharge instructions, and bedside teaching when appropriate.
PDSA Cycle 2
Based on team member feedback and further discussions, a discharge checklist was developed. This checklist was available online, reviewed in person, and posted in the team room for rotating residents to use for discharge planning and when discharging patients (Figure 1).
PDSA Cycle 3
Based on ongoing user feedback, group discussions, and data monitoring, the discharge checklist was further refined and updated. An electronic clinical decision support tool was developed and integrated into the electronic medical record (EMR) in the form of a discharge checklist note template directly linked to orders. The tool is a computerized patient record system (CPRS) note template that prompts users to select whether medications or return to clinic orders are needed and offers a menu of frequently used medications. If any of the selections are chosen within the note template, an order is generated automatically in the chart that requires only the user’s signature. Furthermore, the patient care coordinator reviews the prescribed follow-up and works with the medical support assistant to make these appointments. The physician is contacted only when an appointment cannot be made. Therefore, this tool allows many additional actions to be bypassed such as generating medication and return to clinic orders individually and calling schedulers to make follow-up appointments (Figure 2).
Data Analysis
All patients discharged during the 2-month periods before and after implementation of the standardized process were reviewed. Patients who followed up with hematology and oncology at another facility, enrolled in hospice, or died during admission were excluded. Follow-up appointment scheduling data and communication between inpatient and outpatient providers were reviewed. Data were analyzed with an XmR statistical process control chart and the Fisher exact test using GraphPad software. Control limits were calculated for each PDSA cycle as the mean ± 2.66 times the average moving range. All data were included in the analysis.
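For readers unfamiliar with XmR charts, the control-limit calculation described above can be sketched as follows. This is a minimal illustration with hypothetical days-to-follow-up values; the function name and data are ours, not the study’s.

```python
def xmr_limits(values):
    """Return (mean, lower, upper) XmR control limits.

    Limits are mean +/- 2.66 * average moving range, where the moving
    range is the absolute difference between consecutive observations.
    """
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean, mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

# Hypothetical days-to-follow-up for consecutive discharges
days = [12, 20, 15, 30, 9, 18, 25, 11]
mean, lcl, ucl = xmr_limits(days)
print(round(mean, 2), round(lcl, 2), round(ucl, 2))  # prints 17.5 -12.52 47.52
```

A lower control limit below zero, as here, is simply reported as zero in practice, since a negative time to follow-up is impossible.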
Results
A total of 142 consecutive patients were reviewed from May 1 to August 31, 2018, and from January 1 to April 30, 2019, including 58 patients before the intervention and 84 patients during the PDSA cycles. There was a gap in data collection between September 1 and December 31, 2018, due to limited team member availability. All data were collected by 2 reviewers: a postgraduate year (PGY)-4 chief resident and a PGY-2 internal medicine resident. The median age was 72 years in the preintervention group and 69 years in the postintervention group. All patients were men. Baseline data revealed a mean of 17 days to hematology and oncology follow-up. Primary care visits were scheduled for 68% of patients at the time of discharge. The outpatient hematology and oncology physician was alerted electronically to the discharge summary for 20% of patients (Table 2).
The primary endpoint of time to hematology and oncology follow-up appointment improved to 13 days in PDSA cycles 1 and 2 and to 10 days in PDSA cycle 3; the target of a mean of 14 days to follow-up was achieved. The statistical process control chart shows 5 shifts with clusters of ≥ 7 points below the mean, revealing a true signal (a change in the data) and demonstrating improvement (Figure 3). Furthermore, the chart shows that the upper control limit decreased from 58 days at baseline to 21 days in PDSA cycle 3, suggesting a decrease in variation.
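The “shift” rule used above, a run of consecutive points on one side of the center line, is a standard special-cause signal in statistical process control. A minimal sketch, with the run length of 7 from the text as a parameter (the function and data are illustrative, not the study’s):

```python
def has_shift(points, center, run_length=7):
    """Return True if run_length or more consecutive points fall on the
    same side of the center line (a common special-cause signal)."""
    run, side = 0, 0  # side: +1 above center, -1 below, 0 on the line
    for p in points:
        s = (p > center) - (p < center)
        run = run + 1 if (s != 0 and s == side) else (1 if s != 0 else 0)
        side = s
        if run >= run_length:
            return True
    return False

# Hypothetical days-to-follow-up hovering below a 17-day center line
print(has_shift([13, 12, 14, 11, 13, 12, 10], center=17))  # prints True
```

A run of 7 or more points is unlikely to occur by chance if the process were still centered at its baseline mean, which is why it is read as a true change rather than noise.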
Regarding secondary endpoints, the outpatient hematology and oncology attending physician and/or fellow was alerted electronically to the discharge summary for 62% of patients compared with 20% at baseline (P = .01), and primary care appointments were scheduled for 77% of patients after the intervention compared with 68% at baseline (P = .88) (Table 2).
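The Fisher exact test used for these before/after proportions compares a 2 × 2 table of counts. As a hedged illustration (our own function and a generic worked table, not the study’s software or data), the two-sided p-value can be computed directly from the hypergeometric distribution:

```python
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher exact p-value for a 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed one."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def prob(x):  # P(first cell = x) under fixed margins
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = prob(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# Worked example on a small 2x2 table
print(round(fisher_exact_two_sided([[8, 2], [1, 5]]), 4))  # prints 0.035
```

The exact test is appropriate here because some of the before/after cells are small, where a chi-square approximation would be unreliable.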
Through ongoing meetings, discussions, and feedback, we identified additional objectives unique to this patient population that had no performance measurement. These included peripherally inserted central catheter (PICC) care nursing visits scheduled 1 week after discharge and port care nursing visits scheduled 4 weeks after discharge. These visits allow nursing staff to dress and flush these catheters for routine maintenance per institutional policy. The implementation of the discharge checklist note creates a mechanism for tracking performance in meeting this goal moving forward; previously, no method was in place to track this metric.
Discussion
The 2013 IOM report Delivering High-Quality Cancer Care: Charting a New Course for a System in Crisis found that cancer care is not as patient-centered, accessible, coordinated, or evidence-based as it could be, with detrimental impacts on patients.3 The document offered a conceptual framework to improve the quality of cancer care that includes the translation of evidence into clinical practice, quality measurement, and performance improvement, as well as using advances in information technology to enhance quality measurement and performance improvement. Our quality initiative uses this framework to work toward the goal stated by the IOM report: to deliver “comprehensive, patient-centered, evidence-based, high-quality cancer care that is accessible and affordable.”3
Two large studies that evaluated risk factors for 15-day and 30-day hospital readmissions identified cancer diagnosis as a risk factor for increased hospital readmission, highlighting the need to identify strategies to improve the discharge process for these patients.4,5 Timely outpatient follow-up and better patient hand-off may improve clinical outcomes among this high-risk patient population after hospital discharge. Multiple studies have demonstrated that timely follow-up is associated with fewer readmissions.1,8-10 A study by Forster and colleagues that evaluated postdischarge adverse events (AEs) revealed a 23% incidence of AEs with 12% of these identified as preventable. Postdischarge monitoring was deemed inadequate among these patients, with closer follow-up and improved hand-offs between inpatient and outpatient medical teams identified as possible interventions to improve postdischarge patient monitoring and to prevent AEs.7
The present quality initiative to standardize the discharge process for the hematology and oncology service decreased the time to hematology and oncology follow-up appointment, improved communication between inpatient and outpatient teams, and decreased process variation. Timelier follow-up for this complex patient population will likely prevent clinical decompensation and treatment delays and directly improve patient access to care.
The multidisciplinary nature of this effort was instrumental to its successful completion. In a complex health care system, it is challenging to truly understand a problem and identify possible solutions without the perspective of all members of the care team. The involvement of team members with training in quality improvement methodology was important for evaluating and developing interventions in a systematic way. Furthermore, the support and involvement of leadership are important for allocating resources appropriately to achieve system changes that improve care. Using quality improvement methodology, the team was able to map our processes and perform gap and root cause analyses. Strategies to improve our performance were identified using a solutions approach. Changes were implemented, with continued intermittent meetings to monitor progress and discuss how interventions could be made more efficient, effective, and user friendly. The primary goal was ultimately achieved.
Integration of the intervention into the EMR embodies the IOM’s call to use advances in information technology to enhance the quality and delivery of care, quality measurement, and performance improvement.3 This intervention offered the strongest system change: an electronic clinical decision support tool embedded in the EMR as a Discharge Checklist Note linked to associated orders. It was the most robust intervention, as it provided objective data on checklist utilization, offered a more efficient way to communicate discharge needs to team members, and streamlined the workflow for the discharging provider. Furthermore, this electronic tool created the ability to measure other important aspects of care for this patient population that we previously had no mechanism to measure: timely nursing appointments for routine care of PICC lines and ports.
Limitations
The absence of clinical endpoints was a limitation of this study. The present study was unable to evaluate the effect of the intervention on readmission rates, emergency department visits, hospital length of stay, cost, or mortality. Coordinating this multidisciplinary effort required much time and planning, and additional resources were not available to evaluate these clinical endpoints. Further studies are needed to evaluate whether the increased patient access and closer follow-up would result in improvement in these clinical endpoints. Another consideration for future improvement projects would be to include patients in the multidisciplinary team. The patient perspective would be invaluable in identifying gaps in care delivery and strategies aimed at improving care delivery.
Conclusions
This quality initiative to standardize the discharge process for the hematology and oncology service decreased the time to the initial hematology and oncology follow-up appointment, improved communication between inpatient and outpatient teams, and decreased process variation. Timelier follow-up for this complex patient population will likely prevent clinical decompensation and treatment delays and directly improve patient access to care.
Acknowledgments
We thank our patients, whom we hope our process improvement efforts will ultimately benefit. We thank the hematology and oncology staff at Edward Hines Jr. VA Hospital and the Loyola University Medical Center residents and fellows who care for our patients and participated in the multidisciplinary team to improve their care. We also thank Robert Kutter, MS, and Meghan O’Halloran, MD, for their uncompensated assistance in the coordination and execution of this initiative.
1. Jackson C, Shahsahebi M, Wedlake T, DuBard CA. Timeliness of outpatient follow-up: an evidence-based approach for planning after hospital discharge. Ann Fam Med. 2015;13(2):115-122. doi:10.1370/afm.1753
2. Kohn LT, Corrigan J, Donaldson MS, eds. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000.
3. Levit LA, Balogh E, Nass SJ, Ganz P, Institute of Medicine (U.S.), eds. Delivering High-Quality Cancer Care: Charting a New Course for a System in Crisis. Washington, DC: National Academies Press; 2013.
4. Allaudeen N, Vidyarthi A, Maselli J, Auerbach A. Redefining readmission risk factors for general medicine patients. J Hosp Med. 2011;6(2):54-60. doi:10.1002/jhm.805
5. Dorajoo SR, See V, Chan CT, et al. Identifying potentially avoidable readmissions: a medication-based 15-day readmission risk stratification algorithm. Pharmacotherapy. 2017;37(3):268-277. doi:10.1002/phar.1896
6. Kripalani S, LeFevre F, Phillips CO, Williams MV, Basaviah P, Baker DW. Deficits in communication and information transfer between hospital-based and primary care physicians: implications for patient safety and continuity of care. JAMA. 2007;297(8):831-841. doi:10.1001/jama.297.8.831
7. Forster AJ, Clark HD, Menard A, et al. Adverse events among medical patients after discharge from hospital [published correction appears in CMAJ. 2004 March 2;170(5):771]. CMAJ. 2004;170(3):345-349.
8. Hernandez AF, Greiner MA, Fonarow GC, et al. Relationship between early physician follow-up and 30-day readmission among Medicare beneficiaries hospitalized for heart failure. JAMA. 2010;303(17):1716-1722. doi:10.1001/jama.2010.533
9. Misky GJ, Wald HL, Coleman EA. Post-hospitalization transitions: examining the effects of timing of primary care provider follow-up. J Hosp Med. 2010;5(7):392-397. doi:10.1002/jhm.666
10. Sharma G, Kuo YF, Freeman JL, Zhang DD, Goodwin JS. Outpatient follow-up visit and 30-day emergency department visit and readmission in patients hospitalized for chronic obstructive pulmonary disease. Arch Intern Med. 2010;170(18):1664-1670. doi:10.1001/archinternmed.2010.345
Hematology and oncology patients are a complex patient population that requires timely follow-up to prevent clinical decompensation and delays in treatment. Previous reports have demonstrated that outpatient follow-up within 14 days is associated with decreased 30-day readmissions. The magnitude of this effect is greater for higher-risk patients.1 Therefore, patients being discharged from the hematology and oncology inpatient service should be seen by a hematology and oncology provider within 14 days of discharge. Patients who do not require close oncologic follow-up should be seen by a primary care provider (PCP) within this timeframe.
Background
The Institute of Medicine (IOM) identified the need to focus on quality improvement and patient safety with a 1999 report, To Err Is Human.2 Tremendous strides have been made in the areas of quality improvement and patient safety over the past 2 decades. In a 2013 report, the IOM further identified hematology and oncology care as an area of need due to a combination of growing demand, the complexity of cancer and cancer treatment, a shrinking workforce, and rising costs. The report concluded that cancer care is not as patient-centered, accessible, coordinated, or evidence-based as it could be, with detrimental impacts on patients.3 Patients with cancer have been identified as a high-risk population for hospital readmissions.4,5 Lack of timely follow-up and failed hand-offs have been identified as factors contributing to poor outcomes at the time of discharge.6-10
Upon internal review of baseline performance data, we identified areas of the discharge process needing improvement. These included the time to hematology and oncology follow-up appointment, the percentage of patients with PCP appointments scheduled at the time of discharge, and electronic alerts to the outpatient hematologist/oncologist regarding discharge summaries. Patients discharged from the inpatient service were seen by their outpatient hematology and oncology provider a mean of 17 days after discharge, and the time to the follow-up appointment varied substantially, with some patients seen several weeks to months after discharge. Furthermore, only 68% of patients had a primary care appointment scheduled at the time of discharge. These data, along with a review of the medical literature, supported our initiative to improve the transition from inpatient to outpatient care for our hematology and oncology patients.
Plan-Do-Study-Act (PDSA) quality improvement methodology was used to create and implement several interventions to standardize the discharge process for this patient population, with the primary goal of decreasing the mean time to hematology and oncology follow-up from 17 days to fewer than 14 days. Patients who do not require close oncologic follow-up should be seen by a PCP within this timeframe; otherwise, PCP follow-up should be scheduled within 6 months. Secondary aims included (1) an increase in scheduled PCP visits at the time of discharge from 68% to > 90%; and (2) an increase in communication of the discharge summary via electronic alerting of the outpatient hematology and oncology physician from 20% to > 90%. Herein, we report our experience and the results of this quality improvement initiative.
Methods
The Institutional Review Board at Edward Hines, Jr. Veterans Affairs Hospital in Hines, Illinois, reviewed this single-center study and deemed it exempt from oversight. Using PDSA quality improvement methodology, a multidisciplinary team of hematology and oncology staff developed and implemented a standardized discharge process. The team included a robust representation of the inpatient and outpatient staff caring for the hematology and oncology patient population: attending physicians, fellows, residents, advanced practice nurses, registered nurses, clinical pharmacists, patient care coordinators, clinic schedulers, clinical applications coordinators, quality support staff, and a systems redesign coach. Hospital leadership, including the chief of staff, chief of medicine, and chief of nursing, participated as the management guidance team. Several interviews and group meetings were conducted, and the multidisciplinary team collaboratively developed and implemented the interventions and monitored the results.
Outcome measures were identified, including time to hematology and oncology clinic visit, primary care follow-up scheduling, and communication of discharge to the outpatient hematology and oncology physician. Baseline data were collected and reviewed. The multidisciplinary team developed a process flow map to understand the steps and resources involved with the transition from inpatient to outpatient care. Gap analysis and root cause analysis were performed. A solutions approach was applied to develop interventions. Table 1 shows a summary of the identified problems, symptoms, associated causes, the interventions aimed to address the problems, and expected outcomes. Rotating resident physicians were trained through online and in-person education. The multidisciplinary team met intermittently to monitor outcomes, provide feedback, further refine interventions, and develop additional interventions.
PDSA Cycle 1
A standardized discharge process was developed in the form of guidelines and expectations. These include an explanation of unique features of the hematology and oncology service and expectations of medication reconciliation with emphasis placed on antiemetics, antimicrobial prophylaxis, and bowel regimen when appropriate, outpatient hematology and oncology follow-up within 14 days, primary care follow-up, communication with the outpatient hematology and oncology physician, written discharge instructions, and bedside teaching when appropriate.
PDSA Cycle 2
Based on team member feedback and further discussions, a discharge checklist was developed. This checklist was available online, reviewed in person, and posted in the team room for rotating residents to use for discharge planning and when discharging patients (Figure 1).
PDSA Cycle 3
Based on ongoing user feedback, group discussions, and data monitoring, the discharge checklist was further refined and updated. An electronic clinical decision support tool was developed and integrated into the electronic medical record (EMR) in the form of a discharge checklist note template directly linked to orders. The tool is a computerized patient record system (CPRS) note template that prompts users to select whether medications or return to clinic orders are needed and offers a menu of frequently used medications. If any of the selections are chosen within the note template, an order is generated automatically in the chart that requires only the user’s signature. Furthermore, the patient care coordinator reviews the prescribed follow-up and works with the medical support assistant to make these appointments. The physician is contacted only when an appointment cannot be made. Therefore, this tool allows many additional actions to be bypassed such as generating medication and return to clinic orders individually and calling schedulers to make follow-up appointments (Figure 2).
Data Analysis
All patients discharged during the 2-month period prior to and discharged after the implementation of the standardized process were reviewed. Patients who followed up with hematology and oncology at another facility, enrolled in hospice, or died during admission were excluded. Follow-up appointment scheduling data and communication between inpatient and outpatient providers were reviewed. Data were analyzed using XmR statistical process control chart and Fisher’s Exact Test using GraphPad. Control limits were calculated for each PDSA cycle as the mean ± the average of the moving range multiplied by 2.66. All data were included in the analysis.
Results
A total of 142 consecutive patients were reviewed from May 1, 2018 to August 31, 2018 and January 1, 2019 to April 30, 2019, including 58 patients prior to the intervention and 84 patients during PDSA cycles. There was a gap in data collection between September 1, 2018 and December 31, 2018 due to limited team member availability. All data were collected by 2 reviewers—a postgraduate year (PGY)-4 chief resident and a PGY-2 internal medicine resident. The median age of patients in the preintervention group was 72 years and 69 years in the postintervention group. All patients were men. Baseline data revealed a mean 17 days to hematology and oncology follow-up. Primary care visits were scheduled for 68% of patients at the time of discharge. The outpatient hematology and oncology physician was alerted electronically to the discharge summary for 20% of the patients (Table 2).
The primary endpoint of time to hematology and oncology follow-up appointment improved to 13 days in PDSA cycles 1 and 2 and 10 days in PDSA cycle 3. The target of mean 14 days to follow-up was achieved. The statistical process control chart shows 5 shifts with clusters of ≥ 7 points below the mean revealing a true signal or change in the data and demonstrating that an improvement was seen (Figure 3). Furthermore, the statistical process control chart demonstrates upper control limit decreased from 58 days at baseline to 21 days in PDSA cycle 3, suggesting a decrease in variation.
Regarding secondary endpoints, the outpatient hematology and oncology attending physician and/or fellow was alerted electronically to the discharge summary for 62% of patients compared with 20% at baseline (P = .01), and primary care appointments were scheduled for 77% of patients after the intervention compared with 68% at baseline (P = .88) (Table 2).
Through ongoing meetings, discussions, and feedback, we identified additional objectives unique to this patient population that had no performance measurement. These included peripherally inserted central catheter (PICC) care nursing visits scheduled 1 week after discharge and port care nursing visits scheduled 4 weeks after discharge. These visits allow nursing staff to dress and flush these catheters for routine maintenance per institutional policy. The implementation of the discharge checklist note creates a mechanism of tracking performance in meeting this goal moving forward, whereas no method was in place to track this metric.
Discussion
The 2013 IOM report Delivering High-Quality Cancer Care: Charting a New Course for a System in Crisis found that that cancer care is not as patient-centered, accessible, coordinated, or evidence-based as it could be, with detrimental impacts on patients.3 The document offered a conceptual framework to improve quality of cancer care that includes the translation of evidence into clinical practice, quality measurement, and performance improvement, as well as using advances in information technology to enhance quality measurement and performance improvement. Our quality initiative uses this framework to work toward the goal as stated by the IOM report: to deliver “comprehensive, patient-centered, evidence-based, high-quality cancer care that is accessible and affordable.”3
Two large studies that evaluated risk factors for 15-day and 30-day hospital readmissions identified cancer diagnosis as a risk factor for increased hospital readmission, highlighting the need to identify strategies to improve the discharge process for these patients.4,5 Timely outpatient follow-up and better patient hand-off may improve clinical outcomes among this high-risk patient population after hospital discharge. Multiple studies have demonstrated that timely follow-up is associated with fewer readmissions.1,8-10 A study by Forster and colleagues that evaluated postdischarge adverse events (AEs) revealed a 23% incidence of AEs with 12% of these identified as preventable. Postdischarge monitoring was deemed inadequate among these patients, with closer follow-up and improved hand-offs between inpatient and outpatient medical teams identified as possible interventions to improve postdischarge patient monitoring and to prevent AEs.7
The present quality initiative to standardize the discharge process for the hematology and oncology service decreased time to hematology and oncology follow-up appointment, improved communication between inpatient and outpatient teams, and decreased process variation. Timelier follow-up for this complex patient population likely will prevent clinical decompensation, delays in treatment, and directly improve patient access to care.
The multidisciplinary nature of this effort was instrumental to successful completion. In a complex health care system, it is challenging to truly understand a problem and identify possible solutions without the perspective of all members of the care team. The involvement of team members with training in quality improvement methodology was important to evaluate and develop interventions in a systematic way. Furthermore, the support and involvement of leadership is important in order to allocate resources appropriately to achieve system changes that improve care. Using quality improvement methodology, the team was able to map our processes and perform gap and root cause analyses. Strategies were identified to improve our performance using a solutions approach. Changes were implemented with continued intermittent meetings for monitoring of progression and discussion of how interventions could be made more efficient, effective, and user friendly. The primary goal was ultimately achieved.
Integration of intervention into the EMR embodies the IOM’s call to use advances in information technology to enhance the quality and delivery of care, quality measurement, and performance improvement.3 This intervention offered the strongest system changes as an electronic clinical decision support tool was developed and embedded into the EMR in the form of a Discharge Checklist Note that is linked to associated orders. This intervention was the most robust, as it provided objective data regarding utilization of the checklist, offered a more efficient way to communicate with team members regarding discharge needs, and streamlined the workflow for the discharging provider. Furthermore, this electronic tool created the ability to measure other important aspects in the care of this patient population that we previously had no mechanism of measuring: timely nursing appointments for routine care of PICC lines and ports.
Limitations
The absence of clinical endpoints was a limitation of this study. The present study was unable to evaluate the effect of the intervention on readmission rates, emergency department visits, hospital length of stay, cost, or mortality. Coordinating this multidisciplinary effort required much time and planning, and additional resources were not available to evaluate these clinical endpoints. Further studies are needed to evaluate whether the increased patient access and closer follow-up would result in improvement in these clinical endpoints. Another consideration for future improvement projects would be to include patients in the multidisciplinary team. The patient perspective would be invaluable in identifying gaps in care delivery and strategies aimed at improving care delivery.
Conclusions
This quality initiative to standardize the discharge process for the hematology and oncology service decreased time to the initial hematology and oncology follow-up appointment, improved communication between inpatient and outpatient teams, and decreased process variation. Timelier follow-up for this complex patient population likely will prevent clinical decompensation, delays in treatment, and directly improve patient access to care.
Acknowledgments
We thank our patients for whom we hope our process improvement efforts will ultimately benefit. We thank all the hematology and oncology staff at Edward Hines Jr. VA Hospital and Loyola University Medical Center residents and fellows who care for our patients and participated in the multidisciplinary team to improve care for our patients. We thank the following professionals for their uncompensated assistance in the coordination and execution of this initiative: Robert Kutter, MS, and Meghan O’Halloran, MD.
Hematology and oncology patients are a complex patient population that requires timely follow-up to prevent clinical decompensation and delays in treatment. Previous reports have demonstrated that outpatient follow-up within 14 days is associated with decreased 30-day readmissions. The magnitude of this effect is greater for higher-risk patients.1 Therefore, patients being discharged from the hematology and oncology inpatient service should be seen by a hematology and oncology provider within 14 days of discharge. Patients who do not require close oncologic follow-up should be seen by a primary care provider (PCP) within this timeframe.
Background
The Institute of Medicine (IOM) identified the need to focus on quality improvement and patient safety with a 1999 report, To Err Is Human.2 Tremendous strides have been made in the areas of quality improvement and patient safety over the past 2 decades. In a 2013 report, the IOM further identified hematology and oncology care as an area of need due to a combination of growing demand, complexity of cancer and cancer treatment, shrinking workforce, and rising costs. The report concluded that cancer care is not as patient-centered, accessible, coordinated, or evidence based as it could be, with detrimental impacts on patients.3 Patients with cancer have been identified as a high-risk population for hospital readmissions.4,5 Lack of timely follow-up and failed hand-offs have been identified as factors contributing to poor outcomes at time of discharge.6-10
Upon internal review of baseline performance data, we identified areas needing improvement in the discharge process. These included time to hematology and oncology follow-up appointment, percent of patients with PCP appointments scheduled at time of discharge, and electronically alerts for the outpatient hematologist/oncologist to discharge summaries. It was determined that patients discharged from the inpatient service were seen a mean 17 days later by their outpatient hematology and oncology provider and the time to the follow-up appointment varied substantially, with some patients being seen several weeks to months after discharge. Furthermore, only 68% of patients had a primary care appointment scheduled at the time of discharge. These data along with review of data reported in the medical literature supported our initiative for improvement in the transition from inpatient to outpatient care for our hematology and oncology patients.
Plan-Do-Study-Act (PDSA) quality improvement methodology was used to create and implement several interventions to standardize the discharge process for this patient population, with the primary goal of decreasing the mean time to hematology and oncology follow-up from 17 days by 12% to fewer than 14 days. Patients who do not require close oncologic follow-up should be seen by a PCP within this timeframe. Otherwise, PCP follow-up within at least 6 months should be made. Secondary aims included (1) an increase in scheduled PCP visits at time of discharge from 68% to > 90%; and (2) an increase in communication of the discharge summary via electronic alerting of the outpatient hematology and oncology physician from 20% to > 90%. Herein, we report our experience and results of this quality improvement initiative
Methods
The Institutional Review Board at Edward Hines Veteran Affairs Hospital in Hines, Illinois reviewed this single-center study and deemed it to be exempt from oversight. Using PDSA quality improvement methodology, a multidisciplinary team of hematology and oncology staff developed and implemented a standardized discharge process. The multidisciplinary team included a robust representation of inpatient and outpatient staff caring for the hematology and oncology patient population, including attending physicians, fellows, residents, advanced practice nurses, registered nurses, clinical pharmacists, patient care coordinators, clinic schedulers, clinical applications coordinators, quality support staff, and a systems redesign coach. Hospital leadership including chief of staff, chief of medicine, and chief of nursing participated as the management guidance team. Several interviews and group meetings were conducted and a multidisciplinary team collaboratively developed and implemented the interventions and monitored the results.
Outcome measures were identified, including time to hematology and oncology clinic visit, primary care follow-up scheduling, and communication of discharge to the outpatient hematology and oncology physician. Baseline data were collected and reviewed. The multidisciplinary team developed a process flow map to understand the steps and resources involved with the transition from inpatient to outpatient care. Gap analysis and root cause analysis were performed. A solutions approach was applied to develop interventions. Table 1 shows a summary of the identified problems, symptoms, associated causes, the interventions aimed to address the problems, and expected outcomes. Rotating resident physicians were trained through online and in-person education. The multidisciplinary team met intermittently to monitor outcomes, provide feedback, further refine interventions, and develop additional interventions.
PDSA Cycle 1
A standardized discharge process was developed in the form of guidelines and expectations. These included an explanation of the unique features of the hematology and oncology service as well as expectations for medication reconciliation (with emphasis on antiemetics, antimicrobial prophylaxis, and bowel regimen when appropriate), outpatient hematology and oncology follow-up within 14 days, primary care follow-up, communication with the outpatient hematology and oncology physician, written discharge instructions, and bedside teaching when appropriate.
PDSA Cycle 2
Based on team member feedback and further discussions, a discharge checklist was developed. This checklist was available online, reviewed in person, and posted in the team room for rotating residents to use for discharge planning and when discharging patients (Figure 1).
PDSA Cycle 3
Based on ongoing user feedback, group discussions, and data monitoring, the discharge checklist was further refined and updated. An electronic clinical decision support tool was developed and integrated into the electronic medical record (EMR) in the form of a discharge checklist note template directly linked to orders. The tool is a computerized patient record system (CPRS) note template that prompts users to select whether medications or return to clinic orders are needed and offers a menu of frequently used medications. If any of the selections are chosen within the note template, an order is generated automatically in the chart that requires only the user’s signature. Furthermore, the patient care coordinator reviews the prescribed follow-up and works with the medical support assistant to make these appointments. The physician is contacted only when an appointment cannot be made. Therefore, this tool allows many additional actions to be bypassed such as generating medication and return to clinic orders individually and calling schedulers to make follow-up appointments (Figure 2).
Data Analysis
All patients discharged during the 2-month period prior to implementation of the standardized process and all patients discharged after implementation were reviewed. Patients who followed up with hematology and oncology at another facility, enrolled in hospice, or died during admission were excluded. Follow-up appointment scheduling data and communication between inpatient and outpatient providers were reviewed. Data were analyzed using XmR statistical process control charts and the Fisher exact test in GraphPad. Control limits were calculated for each PDSA cycle as the mean ± the average of the moving range multiplied by 2.66. All data were included in the analysis.
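The XmR control-limit calculation described above (mean ± 2.66 × average moving range) can be sketched in a few lines. The sample values below are illustrative only, not the study data.

```python
def xmr_limits(values):
    """XmR (individuals/moving-range) control limits:
    center line = mean; limits = mean +/- 2.66 * average moving range."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * avg_mr, mean, mean + 2.66 * avg_mr

# Illustrative days-to-follow-up values (not the study data)
days = [21, 15, 30, 12, 18, 25, 9, 14]
lcl, center, ucl = xmr_limits(days)
```

A point outside these limits, or a sustained run on one side of the center line, signals special-cause variation rather than routine noise.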
Results
A total of 142 consecutive patients were reviewed from May 1, 2018 to August 31, 2018 and January 1, 2019 to April 30, 2019, including 58 patients prior to the intervention and 84 patients during the PDSA cycles. There was a gap in data collection between September 1, 2018 and December 31, 2018 due to limited team member availability. All data were collected by 2 reviewers: a postgraduate year (PGY)-4 chief resident and a PGY-2 internal medicine resident. The median age was 72 years in the preintervention group and 69 years in the postintervention group. All patients were men. Baseline data revealed a mean of 17 days to hematology and oncology follow-up. Primary care visits were scheduled for 68% of patients at the time of discharge. The outpatient hematology and oncology physician was alerted electronically to the discharge summary for 20% of patients (Table 2).
The primary endpoint of time to hematology and oncology follow-up appointment improved to a mean of 13 days in PDSA cycles 1 and 2 and 10 days in PDSA cycle 3, achieving the target of fewer than 14 days. The statistical process control chart shows 5 shifts with clusters of ≥ 7 points below the mean, revealing a true signal, or change in the data, and demonstrating that an improvement occurred (Figure 3). Furthermore, the chart shows that the upper control limit decreased from 58 days at baseline to 21 days in PDSA cycle 3, suggesting a decrease in variation.
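The shift rule applied here (a cluster of ≥ 7 consecutive points below the center line) is one of the standard control chart run rules. A minimal sketch, using an invented series for illustration:

```python
def shift_signals(values, center, run_length=7):
    """Return start indices of runs of >= run_length consecutive
    points below the center line (a standard run rule for shifts)."""
    signals = []
    run = 0
    for i, v in enumerate(values):
        run = run + 1 if v < center else 0
        if run == run_length:
            signals.append(i - run_length + 1)  # start of the run
    return signals

# Invented series: a sustained drop below a 17-day center line
series = [20, 18, 22, 16, 15, 13, 12, 10, 11, 9, 14, 12]
print(shift_signals(series, 17))  # → [3]
```

Because each flagged run requires 7 consecutive points on one side of the center line, a single low outlier does not trigger the rule; only a sustained process change does.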
Regarding secondary endpoints, the outpatient hematology and oncology attending physician and/or fellow was alerted electronically to the discharge summary for 62% of patients compared with 20% at baseline (P = .01), and primary care appointments were scheduled for 77% of patients after the intervention compared with 68% at baseline (P = .88) (Table 2).
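A Fisher exact test on a 2 × 2 contingency table underlies these proportion comparisons. The counts below are rough reconstructions from the reported percentages (about 20% of 58 baseline patients vs 62% of 84 postintervention patients with an electronic alert) and are illustrative only, so the resulting P value need not match the published one.

```python
from scipy.stats import fisher_exact

# Rows: baseline, postintervention; columns: alerted, not alerted.
# Counts are approximate reconstructions from reported percentages,
# not the study's source data.
table = [[12, 46],   # baseline: ~20% of 58 alerted
         [52, 32]]   # postintervention: ~62% of 84 alerted
odds_ratio, p_value = fisher_exact(table)
```

The exact test is appropriate here because some cell counts are small enough that a χ2 approximation could be unreliable.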
Through ongoing meetings, discussions, and feedback, we identified additional objectives unique to this patient population that had no performance measurement. These included peripherally inserted central catheter (PICC) care nursing visits scheduled 1 week after discharge and port care nursing visits scheduled 4 weeks after discharge. These visits allow nursing staff to dress and flush these catheters for routine maintenance per institutional policy. The implementation of the discharge checklist note creates a mechanism for tracking performance in meeting this goal moving forward; previously, no method was in place to track this metric.
Discussion
The 2013 IOM report Delivering High-Quality Cancer Care: Charting a New Course for a System in Crisis found that cancer care is not as patient-centered, accessible, coordinated, or evidence-based as it could be, with detrimental impacts on patients.3 The document offered a conceptual framework to improve the quality of cancer care that includes the translation of evidence into clinical practice, quality measurement, and performance improvement, as well as using advances in information technology to enhance quality measurement and performance improvement. Our quality initiative uses this framework to work toward the goal stated in the IOM report: to deliver “comprehensive, patient-centered, evidence-based, high-quality cancer care that is accessible and affordable.”3
Two large studies that evaluated risk factors for 15-day and 30-day hospital readmissions identified cancer diagnosis as a risk factor for increased hospital readmission, highlighting the need to identify strategies to improve the discharge process for these patients.4,5 Timely outpatient follow-up and better patient hand-off may improve clinical outcomes among this high-risk patient population after hospital discharge. Multiple studies have demonstrated that timely follow-up is associated with fewer readmissions.1,8-10 A study by Forster and colleagues that evaluated postdischarge adverse events (AEs) revealed a 23% incidence of AEs with 12% of these identified as preventable. Postdischarge monitoring was deemed inadequate among these patients, with closer follow-up and improved hand-offs between inpatient and outpatient medical teams identified as possible interventions to improve postdischarge patient monitoring and to prevent AEs.7
The present quality initiative to standardize the discharge process for the hematology and oncology service decreased time to hematology and oncology follow-up appointment, improved communication between inpatient and outpatient teams, and decreased process variation. Timelier follow-up for this complex patient population likely will prevent clinical decompensation and treatment delays while directly improving patient access to care.
The multidisciplinary nature of this effort was instrumental to its successful completion. In a complex health care system, it is challenging to truly understand a problem and identify possible solutions without the perspective of all members of the care team. The involvement of team members with training in quality improvement methodology was important for evaluating and developing interventions in a systematic way. Furthermore, the support and involvement of leadership are important for allocating resources appropriately to achieve system changes that improve care. Using quality improvement methodology, the team was able to map our processes and perform gap and root cause analyses. Strategies were identified to improve our performance using a solutions approach. Changes were implemented with continued intermittent meetings to monitor progress and discuss how interventions could be made more efficient, effective, and user friendly. The primary goal was ultimately achieved.
Integration of intervention into the EMR embodies the IOM’s call to use advances in information technology to enhance the quality and delivery of care, quality measurement, and performance improvement.3 This intervention offered the strongest system changes as an electronic clinical decision support tool was developed and embedded into the EMR in the form of a Discharge Checklist Note that is linked to associated orders. This intervention was the most robust, as it provided objective data regarding utilization of the checklist, offered a more efficient way to communicate with team members regarding discharge needs, and streamlined the workflow for the discharging provider. Furthermore, this electronic tool created the ability to measure other important aspects in the care of this patient population that we previously had no mechanism of measuring: timely nursing appointments for routine care of PICC lines and ports.
Limitations
The absence of clinical endpoints was a limitation of this study. The present study was unable to evaluate the effect of the intervention on readmission rates, emergency department visits, hospital length of stay, cost, or mortality. Coordinating this multidisciplinary effort required much time and planning, and additional resources were not available to evaluate these clinical endpoints. Further studies are needed to evaluate whether the increased patient access and closer follow-up would result in improvement in these clinical endpoints. Another consideration for future improvement projects would be to include patients in the multidisciplinary team. The patient perspective would be invaluable in identifying gaps in care delivery and strategies aimed at improving care delivery.
Conclusions
This quality initiative to standardize the discharge process for the hematology and oncology service decreased time to the initial hematology and oncology follow-up appointment, improved communication between inpatient and outpatient teams, and decreased process variation. Timelier follow-up for this complex patient population likely will prevent clinical decompensation and treatment delays while directly improving patient access to care.
Acknowledgments
We thank our patients, whom we hope our process improvement efforts will ultimately benefit. We thank all the hematology and oncology staff at Edward Hines Jr. VA Hospital and the Loyola University Medical Center residents and fellows who care for our patients and participated in the multidisciplinary team to improve care for our patients. We thank the following professionals for their uncompensated assistance in the coordination and execution of this initiative: Robert Kutter, MS, and Meghan O’Halloran, MD.
1. Jackson C, Shahsahebi M, Wedlake T, DuBard CA. Timeliness of outpatient follow-up: an evidence-based approach for planning after hospital discharge. Ann Fam Med. 2015;13(2):115-122. doi:10.1370/afm.1753
2. Kohn LT, Corrigan J, Donaldson MS, eds. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000.
3. Levit LA, Balogh E, Nass SJ, Ganz P, Institute of Medicine (U.S.), eds. Delivering High-Quality Cancer Care: Charting a New Course for a System in Crisis. Washington, DC: National Academies Press; 2013.
4. Allaudeen N, Vidyarthi A, Maselli J, Auerbach A. Redefining readmission risk factors for general medicine patients. J Hosp Med. 2011;6(2):54-60. doi:10.1002/jhm.805
5. Dorajoo SR, See V, Chan CT, et al. Identifying potentially avoidable readmissions: a medication-based 15-day readmission risk stratification algorithm. Pharmacotherapy. 2017;37(3):268-277. doi:10.1002/phar.1896
6. Kripalani S, LeFevre F, Phillips CO, Williams MV, Basaviah P, Baker DW. Deficits in communication and information transfer between hospital-based and primary care physicians: implications for patient safety and continuity of care. JAMA. 2007;297(8):831-841. doi:10.1001/jama.297.8.831
7. Forster AJ, Clark HD, Menard A, et al. Adverse events among medical patients after discharge from hospital [published correction appears in CMAJ. 2004 March 2;170(5):771]. CMAJ. 2004;170(3):345-349.
8. Hernandez AF, Greiner MA, Fonarow GC, et al. Relationship between early physician follow-up and 30-day readmission among Medicare beneficiaries hospitalized for heart failure. JAMA. 2010;303(17):1716-1722. doi:10.1001/jama.2010.533
9. Misky GJ, Wald HL, Coleman EA. Post-hospitalization transitions: examining the effects of timing of primary care provider follow-up. J Hosp Med. 2010;5(7):392-397. doi:10.1002/jhm.666
10. Sharma G, Kuo YF, Freeman JL, Zhang DD, Goodwin JS. Outpatient follow-up visit and 30-day emergency department visit and readmission in patients hospitalized for chronic obstructive pulmonary disease. Arch Intern Med. 2010;170(18):1664-1670. doi:10.1001/archinternmed.2010.345
Factors Associated with Radiation Toxicity and Survival in Patients with Presumed Early-Stage Non-Small Cell Lung Cancer Receiving Empiric Stereotactic Ablative Radiotherapy
Stereotactic ablative radiotherapy (SABR) has become the standard of care for inoperable early-stage non-small cell lung cancer (NSCLC). Many patients are unable to undergo a biopsy safely because of poor pulmonary function or underlying emphysema and are then empirically treated with radiotherapy if they meet criteria. In these patients, local control can be achieved with SABR with minimal toxicity.1 Considering that median overall survival (OS) among patients with untreated stage I NSCLC has been reported to be as low as 9 months, early treatment with SABR could lead to increased survival of 29 to 60 months.2-4
The RTOG 0236 trial showed a median OS of 48 months and the randomized phase III CHISEL trial showed a median OS of 60 months; however, these survival data were reported in patients who were able to safely undergo a biopsy and had confirmed NSCLC.4,5 For patients without a diagnosis confirmed by biopsy and who are treated with empiric SABR, patient factors that influence radiation toxicity and OS are not well defined.
It is not clear if empiric radiation benefits survival or if treatment causes decline in lung function, considering that underlying chronic lung disease precludes these patients from biopsy. The purpose of this study was to evaluate the factors associated with radiation toxicity with empiric SABR and to evaluate OS in this population without a biopsy-confirmed diagnosis.
Methods
This was a single-center retrospective review of patients treated at the radiation oncology department at the Kansas City Veterans Affairs Medical Center from August 2014 to February 2019. Data were collected on 69 patients with pulmonary nodules identified by chest computed tomography (CT) and/or positron emission tomography (PET)-CT that were highly suspicious for primary NSCLC.
These patients were presented at a multidisciplinary meeting that involved pulmonologists, oncologists, radiation oncologists, and thoracic surgeons. Patients were deemed poor candidates for biopsy because severe underlying emphysema put them at high risk for pneumothorax with a percutaneous needle biopsy, or because poor lung function made them unable to tolerate general anesthesia for navigational bronchoscopy or surgical biopsy. These patients were diagnosed with presumed stage I NSCLC using the following criteria: a minimum of 2 sequential CT scans showing an enlarging nodule; absence of metastases on PET-CT; a single fluorodeoxyglucose-avid nodule with a minimum standardized uptake value of 2.5; and absence of clinical history or physical examination findings consistent with small cell lung cancer or infection.
After a consensus was reached that patients met these criteria, individuals were referred for empiric SABR. Follow-up visits occurred at 1 month, 3 months, and every 6 months thereafter. Variables analyzed included patient demographics; pre- and posttreatment pulmonary function tests (PFTs) when available; pretreatment oxygen use; tumor size and location (peripheral, central, or ultracentral); radiation doses; and grade of toxicity as defined by the US Department of Health and Human Services Common Terminology Criteria for Adverse Events version 5.0 (dyspnea and cough both counted as pulmonary toxicity), with acute toxicity defined as ≤ 90 days and late toxicity as > 90 days after treatment (Table 1).
SPSS versions 24 and 26 were used for statistical analysis. Medians and ranges were obtained for continuous variables with a normal distribution. Kaplan-Meier log-rank testing was used to analyze OS. χ2 and Mann-Whitney U tests were used to analyze associations between independent variables and OS. Analyses of significant findings were repeated with operable patients excluded.
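The Mann-Whitney U comparison performed here (a continuous variable compared between two groups, such as patients stratified by survival status) can be sketched as follows; the values are invented placeholders, not the study data.

```python
from scipy.stats import mannwhitneyu

# Invented values for a continuous variable in two groups
# (e.g., stratified by survival status); placeholders only,
# not the study data.
group_alive = [21, 15, 30, 12, 18, 25, 22, 17]
group_deceased = [10, 13, 9, 14, 11, 12, 8, 15]
stat, p = mannwhitneyu(group_alive, group_deceased, alternative="two-sided")
```

The rank-based test makes no normality assumption, which suits the skewed distributions common in small clinical samples.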
Results
The median follow-up was 18 months (range, 1 to 54). The median age was 71 years (range, 59 to 95) (Table 2). Most patients (97.1%) were male. The majority of patients (79.4%) had an Eastern Cooperative Oncology Group performance status of 0 or 1, indicating they were fully active or restricted in physically strenuous activity but ambulatory and able to perform light work. All patients were current or former smokers with an average pack-year history of 69.4. Only 11.6% of patients had operable disease but received empiric SABR because they declined surgery. Four patients did not have pretreatment spirometry available, and 37 did not have pretreatment diffusing capacity for carbon monoxide (DLCO) data.
Most patients had a pretreatment forced expiratory volume in 1 second (FEV1) and DLCO < 60% of predicted (60% and 84% of patients, respectively). The median tumor diameter was 2 cm. Of the 68.2% of patients who did not have chronic hypoxemic respiratory failure before SABR, 16% developed a new requirement for supplemental oxygen. Sixty-two tumors (89.9%) were peripheral. There were 4 local recurrences (5.7%), 10 regional (different lobe and nodal) failures (14.3%), and 15 distant metastases (21.4%).
Nineteen of 67 patients (26.3%) had acute toxicity, of whom 9 had acute grade ≥ 2 toxicity; information regarding toxicity was missing for 2 patients. Thirty-two of 65 patients (49.9%) had late toxicity, of whom 20 (30.8%) had late grade ≥ 2 toxicity. The main factor associated with the development of acute toxicity was pretreatment oxygen dependence (P = .047); this association was not significant when comparing only inoperable patients. Twenty patients (29.9%) developed some type of acute toxicity, with pulmonary toxicity the most common (22.4%) (Table 3). All patients with acute toxicity also developed late toxicity except for 1 who died before 3 months. Most deaths in our sample were from causes other than the malignancy or its treatment, such as sepsis, deconditioning after a fall, and cardiovascular complications. Acute toxicity grade ≥ 2 was significantly associated with late toxicity in both operable and inoperable patients (P < .001 for both).
Development of any acute toxicity grade ≥ 2 was significantly associated with oxygen dependence at baseline (P = .003), central tumor location (P < .001), and a new oxygen requirement (P = .02). Only central tumor location remained significant (P = .001) within the inoperable cohort. There were no significant differences in outcome based on pulmonary function testing (FEV1, forced vital capacity, or DLCO) or the analyzed PFT subgroups (FEV1 < 1.0 L, FEV1 < 1.5 L, FEV1 < 30%, and FEV1 < 35%).
At the time of data collection, 30 patients were deceased (43.5%). There was a statistically significant association between OS and operability (P = .03; Table 4, Figure 1). Decreased OS was significantly associated with acute toxicity (P = .001) and acute toxicity grade ≥ 2 (P = .005; Figures 2 and 3). For the inoperable patients, both acute toxicity (P < .001) and acute toxicity grade ≥ 2 (P = .026) remained significant.
Discussion
SABR is an effective treatment for inoperable early-stage NSCLC; however, its therapeutic ratio in a more frail population that cannot withstand biopsy is not well established. Additionally, the prevalence of benign disease in patients with solitary pulmonary nodules can be between 9% and 21%.6 Haidar and colleagues looked at 55 patients who received empiric SABR and found a median OS of 30.2 months, with an 8.7% risk of local failure, a 13% risk of regional failure, 8.7% acute toxicity, and 13% chronic toxicity.7 Data from Harkenrider and colleagues (n = 34) revealed similar results, with a 2-year OS of 85%, local control of 97.1%, and regional control of 80%. The authors noted no grade ≥ 3 acute toxicities and an incidence of grade ≥ 3 late toxicities of 8.8%.1 These findings are concordant with our study results, confirming the safety and efficacy of SABR. Furthermore, a National Cancer Database analysis of observation vs empiric SABR found an OS of 10.1 months and 29 months, respectively, with a hazard ratio of 0.64 (P < .001).3 Additionally, Fischer-Valuck and colleagues (n = 88) compared biopsy-confirmed vs unbiopsied patients treated with SABR and found no difference in 3-year local progression-free survival (93.1% vs 94.1%), regional lymph node and distant metastasis-free survival (92.5% vs 87.4%), or OS (59.9% vs 58.9%).8 With a median OS of ≤ 1 year for untreated stage I NSCLC, these studies support treating patients with empiric SABR.4
Other researchers have sought parameters to identify patients for whom radiation therapy would be too toxic. Guckenberger and colleagues aimed to establish a lower limit of pretreatment PFT to exclude patients and found only a 7% incidence of grade ≥ 2 adverse effects and toxicity did not increase with lower pulmonary function.9 They concluded that SABR was safe even for patients with poor pulmonary function. Other institutions have confirmed such findings and have been unable to find a cut-off PFT to exclude patients from empiric SABR.10,11 An analysis from the RTOG 0236 trial also noted that poor baseline PFT could not predict pulmonary toxicity or survival. Additionally, the study demonstrated only minimal decreases in patients’ FEV1 (5.8%) and DLCO (6%) at 2 years.12
Our study sought to identify a cutoff for FEV1 or DLCO that could be associated with increased toxicity. We also evaluated the incidence of acute toxicities grade ≥ 2 by stratifying patients according to FEV1 into subgroups: FEV1 < 1.0 L, FEV1 < 1.5 L, FEV1 < 30% of predicted, and FEV1 < 35% of predicted. However, similar to other studies, we did not find any value that was significantly associated with increased toxicity and that could preclude empiric SABR. One possible reason is that no treatment is offered to patients whose lung function is deemed extremely poor by clinical judgment; therefore, data on these patients are unavailable. In contrast to other studies, our study found that oxygen dependence before treatment was significantly associated with the development of acute toxicities. The exact mechanism for this association is unknown and could not be elucidated by baseline PFTs. One possible explanation is that SABR could lead to oxygen free radical generation. In addition, our study indicated that those who developed acute toxicities had worse OS.
Limitations
Our study is limited by the caveats of a retrospective design and its small sample size, although the sample size is in line with the reported literature (33 to 88 patients).1,7,8 Other limitations are that pretreatment DLCO data were missing for 37 patients and that the smaller inoperable cohort lacked statistical robustness, which limits the analyses of these factors with regard to anticipated morbidity from SABR. Also, because these data were collected from the US Department of Veterans Affairs, only 3% of our sample was female.
Conclusions
Empiric SABR for patients with presumed early-stage NSCLC appears to be safe and might positively impact OS. Development of any acute toxicity grade ≥ 2 was significantly associated with dependence on supplemental oxygen before treatment, central tumor location, and development of a new oxygen requirement. No association was found with poor pretreatment pulmonary function, as we could not identify an FEV1 or DLCO cutoff that would preclude patients from empiric SABR. Considering the poor survival of untreated early-stage NSCLC, coupled with the efficacy and safety of empiric SABR for those with presumed disease, definitive SABR should be offered selectively within this patient population.
Acknowledgments
Drs. Park, Whiting and Castillo contributed to data collection. Drs. Park, Govindan and Castillo contributed to the statistical analysis and writing the first draft and final manuscript. Drs. Park, Govindan, Huang, and Reddy contributed to the discussion section.
1. Harkenrider MM, Bertke MH, Dunlap NE. Stereotactic body radiation therapy for unbiopsied early-stage lung cancer: a multi-institutional analysis. Am J Clin Oncol. 2014;37(4):337-342. doi:10.1097/COC.0b013e318277d822
2. Raz DJ, Zell JA, Ou SH, Gandara DR, Anton-Culver H, Jablons DM. Natural history of stage I non-small cell lung cancer: implications for early detection. Chest. 2007;132(1):193-199. doi:10.1378/chest.06-3096
3. Nanda RH, Liu Y, Gillespie TW, et al. Stereotactic body radiation therapy versus no treatment for early stage non-small cell lung cancer in medically inoperable elderly patients: a National Cancer Data Base analysis. Cancer. 2015;121(23):4222-4230. doi:10.1002/cncr.29640
4. Ball D, Mai GT, Vinod S, et al. Stereotactic ablative radiotherapy versus standard radiotherapy in stage 1 non-small-cell lung cancer (TROG 09.02 CHISEL): a phase 3, open-label, randomised controlled trial. Lancet Oncol. 2019;20(4):494-503. doi:10.1016/S1470-2045(18)30896-9
5. Timmerman R, Paulus R, Galvin J, et al. Stereotactic body radiation therapy for inoperable early stage lung cancer. JAMA. 2010;303(11):1070-1076. doi:10.1001/jama.2010.261
6. Smith MA, Battafarano RJ, Meyers BF, Zoole JB, Cooper JD, Patterson GA. Prevalence of benign disease in patients undergoing resection for suspected lung cancer. Ann Thorac Surg. 2006;81(5):1824-1828. doi:10.1016/j.athoracsur.2005.11.010
7. Haidar YM, Rahn DA 3rd, Nath S, et al. Comparison of outcomes following stereotactic body radiotherapy for nonsmall cell lung cancer in patients with and without pathological confirmation. Ther Adv Respir Dis. 2014;8(1):3-12. doi:10.1177/1753465813512545
8. Fischer-Valuck BW, Boggs H, Katz S, Durci M, Acharya S, Rosen LR. Comparison of stereotactic body radiation therapy for biopsy-proven versus radiographically diagnosed early-stage non-small lung cancer: a single-institution experience. Tumori. 2015;101(3):287-293. doi:10.5301/tj.5000279
9. Guckenberger M, Kestin LL, Hope AJ, et al. Is there a lower limit of pretreatment pulmonary function for safe and effective stereotactic body radiotherapy for early-stage non-small cell lung cancer? J Thorac Oncol. 2012;7:542-551. doi:10.1097/JTO.0b013e31824165d7
10. Wang J, Cao J, Yuan S, et al. Poor baseline pulmonary function may not increase the risk of radiation-induced lung toxicity. Int J Radiat Oncol Biol Phys. 2013;85(3):798-804. doi:10.1016/j.ijrobp.2012.06.040
11. Henderson M, McGarry R, Yiannoutsos C, et al. Baseline pulmonary function as a predictor for survival and decline in pulmonary function over time in patients undergoing stereotactic body radiotherapy for the treatment of stage I non-small-cell lung cancer. Int J Radiat Oncol Biol Phys. 2008;72(2):404-409. doi:10.1016/j.ijrobp.2007.12.051
12. Stanic S, Paulus R, Timmerman RD, et al. No clinically significant changes in pulmonary function following stereotactic body radiation therapy for early- stage peripheral non-small cell lung cancer: an analysis of RTOG 0236. Int J Radiat Oncol Biol Phys. 2014;88(5):1092-1099. doi:10.1016/j.ijrobp.2013.12.050
Stereotactic ablative radiotherapy (SABR) has become the standard of care for inoperable early-stage non-small cell lung cancer (NSCLC). Many patients are unable to undergo a biopsy safely because of poor pulmonary function or underlying emphysema and are then empirically treated with radiotherapy if they meet criteria. In these patients, local control can be achieved with SABR with minimal toxicity.1 Considering that median overall survival (OS) among patients with untreated stage I NSCLC has been reported to be as low as 9 months, early treatment with SABR could lead to increased survival of 29 to 60 months.2-4
The RTOG 0236 trial showed a median OS of 48 months and the randomized phase III CHISEL trial showed a median OS of 60 months; however, these survival data were reported in patients who were able to safely undergo a biopsy and had confirmed NSCLC.4,5 For patients without a diagnosis confirmed by biopsy and who are treated with empiric SABR, patient factors that influence radiation toxicity and OS are not well defined.
It is not clear if empiric radiation benefits survival or if treatment causes decline in lung function, considering that underlying chronic lung disease precludes these patients from biopsy. The purpose of this study was to evaluate the factors associated with radiation toxicity with empiric SABR and to evaluate OS in this population without a biopsy-confirmed diagnosis.
Methods
This was a single center retrospective review of patients treated at the radiation oncology department at the Kansas City Veterans Affairs Medical Center from August 2014 to February 2019. Data were collected on 69 patients with pulmonary nodules identified by chest computed tomography (CT) and/or positron emission tomography (PET)-CT that were highly suspicious for primary NSCLC.
These patients were presented at a multidisciplinary meeting that involved pulmonologists, oncologists, radiation oncologists, and thoracic surgeons. Patients were deemed poor candidates for biopsy because severe underlying emphysema put them at high risk for pneumothorax with a percutaneous needle biopsy, or because poor lung function made them unable to tolerate general anesthesia for navigational bronchoscopy or surgical biopsy. These patients were diagnosed with presumed stage I NSCLC using the following criteria: a minimum of 2 sequential CT scans showing an enlarging nodule; absence of metastases on PET-CT; a single fluorodeoxyglucose-avid nodule with a minimum standardized uptake value of 2.5; and absence of a clinical history or physical examination findings consistent with small cell lung cancer or infection.
After a consensus was reached that patients met these criteria, individuals were referred for empiric SABR. Follow-up visits occurred at 1 month, 3 months, and every 6 months thereafter. Variables analyzed included patient demographics, pretreatment and posttreatment pulmonary function tests (PFTs) when available, pretreatment oxygen use, tumor size and location (peripheral, central, or ultra-central), radiation doses, and grade of toxicity as defined by the US Department of Health and Human Services Common Terminology Criteria for Adverse Events version 5.0 (dyspnea and cough both counted as pulmonary toxicity): acute ≤ 90 days and late > 90 days (Table 1).
SPSS versions 24 and 26 were used for statistical analysis. Medians and ranges were obtained for continuous variables with a normal distribution. Kaplan-Meier log-rank testing was used to analyze OS. χ2 and Mann-Whitney U tests were used to analyze associations between independent variables and OS. Analyses of significant findings were repeated with operable patients excluded.
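The comparisons described above can be sketched with standard statistical library calls (SciPy standing in for SPSS); all numbers below are synthetic, purely illustrative values, not data from the study.

```python
# Minimal sketch of the study's statistical tests; synthetic data only.
from scipy import stats

# Mann-Whitney U: compare a continuous variable (e.g., tumor diameter in cm)
# between patients with and without acute toxicity.
with_toxicity = [2.1, 3.4, 2.8, 4.0, 3.1]
without_toxicity = [1.5, 2.0, 1.8, 2.2, 1.9, 2.4]
u_stat, p_mw = stats.mannwhitneyu(with_toxicity, without_toxicity,
                                  alternative="two-sided")

# Chi-square test of association between two categorical variables,
# e.g., baseline oxygen dependence (rows) vs acute toxicity (columns).
counts = [[8, 14],   # oxygen dependent: toxicity / no toxicity
          [11, 34]]  # not oxygen dependent
chi2, p_chi, dof, expected = stats.chi2_contingency(counts)
```

Kaplan-Meier estimation and the log-rank test would require a survival analysis package rather than SciPy alone.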
Results
The median follow-up was 18 months (range, 1 to 54). The median age was 71 years (range, 59 to 95) (Table 2). Most patients (97.1%) were male. The majority of patients (79.4%) had an Eastern Cooperative Oncology Group performance status of 0 or 1, indicating they were fully active or restricted in physically strenuous activity but ambulatory and able to perform light work. All patients were current or former smokers, with a mean history of 69.4 pack-years. Only 11.6% of patients had operable disease; they received empiric SABR because they declined surgery. Four patients did not have pretreatment spirometry available, and 37 did not have pretreatment diffusing capacity for carbon monoxide (DLCO) data.
Most patients had a pretreatment forced expiratory volume in 1 second (FEV1) and DLCO < 60% of predicted (60% and 84% of patients, respectively). The median tumor diameter was 2 cm. Of the 68.2% of patients who did not have chronic hypoxemic respiratory failure before SABR, 16% developed a new requirement for supplemental oxygen. Sixty-two tumors (89.9%) were peripheral. There were 4 local recurrences (5.7%), 10 regional (different lobe and nodal) failures (14.3%), and 15 distant metastases (21.4%).
Nineteen of 67 patients (26.3%) had acute toxicity, of which 9 had acute grade ≥ 2 toxicity; information regarding toxicity was missing for 2 patients. Thirty-two of 65 patients (49.9%) had late toxicity, of which 20 (30.8%) had late grade ≥ 2 toxicity. The main factor associated with development of acute toxicity was pretreatment oxygen dependence (P = .047); this was not significant when comparing only inoperable patients. Twenty patients (29.9%) developed some type of acute toxicity; pulmonary toxicity was most common (22.4%) (Table 3). All patients with acute toxicity also developed late toxicity, except for 1 who died before 3 months. Most deaths in our sample were from causes other than the malignancy or its treatment, such as sepsis, deconditioning after a fall, and cardiovascular complications. Acute toxicity of grade ≥ 2 was significantly associated with late toxicity in both operable and inoperable patients (P < .001 for both).
Development of any acute toxicity grade ≥ 2 was significantly associated with oxygen dependence at baseline (P = .003), central tumor location (P < .001), and new oxygen requirement (P = .02). Only central tumor location remained significant (P = .001) within the inoperable cohort. There were no significant differences in outcome based on pulmonary function testing (FEV1, forced vital capacity, or DLCO) or the analyzed PFT subgroups (FEV1 < 1.0 L, FEV1 < 1.5 L, FEV1 < 30%, and FEV1 < 35%).
At the time of data collection, 30 patients were deceased (43.5%). There was a statistically significant association between OS and operability (P = .03; Table 4, Figure 1). Decreased OS was significantly associated with acute toxicity (P = .001) and acute toxicity grade ≥ 2 (P = .005; Figures 2 and 3). For the inoperable patients, both acute toxicity (P < .001) and acute toxicity grade ≥ 2 (P = .026) remained significant.
Discussion
SABR is an effective treatment for inoperable early-stage NSCLC; however, its therapeutic ratio in a frailer population that cannot withstand biopsy is not well established. Additionally, the prevalence of benign disease in patients with solitary pulmonary nodules can be between 9% and 21%.6 Haidar and colleagues studied 55 patients who received empiric SABR and found a median OS of 30.2 months, with an 8.7% risk of local failure, a 13% risk of regional failure, 8.7% acute toxicity, and 13% chronic toxicity.7 Data from Harkenrider and colleagues (n = 34) revealed similar results, with a 2-year OS of 85%, local control of 97.1%, and regional control of 80%; the authors noted no grade ≥ 3 acute toxicities and an 8.8% incidence of grade ≥ 3 late toxicities.1 These findings are concordant with our study results, supporting the safety and efficacy of SABR. Furthermore, a National Cancer Database analysis of observation vs empiric SABR found a median OS of 10.1 months and 29 months, respectively, with a hazard ratio of 0.64 (P < .001).3 Additionally, Fischer-Valuck and colleagues (n = 88) compared biopsy-confirmed vs unbiopsied patients treated with SABR and found no difference in 3-year local progression-free survival (93.1% vs 94.1%), regional lymph node and distant metastasis-free survival (92.5% vs 87.4%), or OS (59.9% vs 58.9%).8 With a median OS of ≤ 1 year for untreated stage I NSCLC, these studies support treating patients with empiric SABR.4
Other researchers have sought parameters to identify patients for whom radiation therapy would be too toxic. Guckenberger and colleagues aimed to establish a lower limit of pretreatment pulmonary function below which patients should be excluded; they found only a 7% incidence of grade ≥ 2 adverse effects, and toxicity did not increase with lower pulmonary function.9 They concluded that SABR was safe even for patients with poor pulmonary function. Other institutions have confirmed these findings and have been unable to find a PFT cutoff to exclude patients from empiric SABR.10,11 An analysis from the RTOG 0236 trial also noted that poor baseline PFT results could not predict pulmonary toxicity or survival; additionally, it demonstrated only minimal decreases in patients’ FEV1 (5.8%) and DLCO (6%) at 2 years.12
Our study sought to identify a cutoff in FEV1 or DLCO associated with increased toxicity. We also evaluated the incidence of acute toxicities grade ≥ 2 by stratifying patients according to FEV1 into subgroups: FEV1 < 1.0 L, FEV1 < 1.5 L, FEV1 < 30% of predicted, and FEV1 < 35% of predicted. However, similar to other studies, we did not find any value significantly associated with increased toxicity that could preclude empiric SABR. One possible reason is that no treatment is offered to patients whose lung function is deemed extremely poor by clinical judgment, so data on these patients are unavailable. In contrast to other studies, we found that oxygen dependence before treatment was significantly associated with development of acute toxicities. The exact mechanism for this association is unknown and could not be elucidated by baseline PFT results; one possible explanation is that SABR could lead to oxygen free radical generation. In addition, our study indicated that those who developed acute toxicities had worse OS.
Limitations
Our study is limited by the caveats of a retrospective design and its small sample size, although the sample is in line with the reported literature (33 to 88 patients).1,7,8 Another limitation is that pretreatment DLCO data were missing for 37 patients, and the smaller inoperable cohort lacked statistical robustness, which limits the analyses of these factors with regard to anticipated morbidity from SABR. Also, because these data were collected from the US Department of Veterans Affairs, only 3% of our sample was female.
Conclusions
Empiric SABR for patients with presumed early-stage NSCLC appears to be safe and might positively impact OS. Development of any acute toxicity grade ≥ 2 was significantly associated with dependence on supplemental oxygen before treatment, central tumor location, and development of a new oxygen requirement. Poor pulmonary function before treatment showed no association with toxicity, and we could not find an FEV1 or DLCO cutoff that would preclude patients from empiric SABR. Considering the poor survival of untreated early-stage NSCLC, coupled with the efficacy and safety of empiric SABR for those with presumed disease, definitive SABR should be offered selectively within this patient population.
Acknowledgments
Drs. Park, Whiting, and Castillo contributed to data collection. Drs. Park, Govindan, and Castillo contributed to the statistical analysis and to writing the first draft and final manuscript. Drs. Park, Govindan, Huang, and Reddy contributed to the discussion section.
1. Harkenrider MM, Bertke MH, Dunlap NE. Stereotactic body radiation therapy for unbiopsied early-stage lung cancer: a multi-institutional analysis. Am J Clin Oncol. 2014;37(4):337-342. doi:10.1097/COC.0b013e318277d822
2. Raz DJ, Zell JA, Ou SH, Gandara DR, Anton-Culver H, Jablons DM. Natural history of stage I non-small cell lung cancer: implications for early detection. Chest. 2007;132(1):193-199. doi:10.1378/chest.06-3096
3. Nanda RH, Liu Y, Gillespie TW, et al. Stereotactic body radiation therapy versus no treatment for early stage non-small cell lung cancer in medically inoperable elderly patients: a National Cancer Data Base analysis. Cancer. 2015;121(23):4222-4230. doi:10.1002/cncr.29640
4. Ball D, Mai GT, Vinod S, et al. Stereotactic ablative radiotherapy versus standard radiotherapy in stage 1 non-small-cell lung cancer (TROG 09.02 CHISEL): a phase 3, open-label, randomised controlled trial. Lancet Oncol. 2019;20(4):494-503. doi:10.1016/S1470-2045(18)30896-9
5. Timmerman R, Paulus R, Galvin J, et al. Stereotactic body radiation therapy for inoperable early stage lung cancer. JAMA. 2010;303(11):1070-1076. doi:10.1001/jama.2010.261
6. Smith MA, Battafarano RJ, Meyers BF, Zoole JB, Cooper JD, Patterson GA. Prevalence of benign disease in patients undergoing resection for suspected lung cancer. Ann Thorac Surg. 2006;81(5):1824-1828. doi:10.1016/j.athoracsur.2005.11.010
7. Haidar YM, Rahn DA 3rd, Nath S, et al. Comparison of outcomes following stereotactic body radiotherapy for nonsmall cell lung cancer in patients with and without pathological confirmation. Ther Adv Respir Dis. 2014;8(1):3-12. doi:10.1177/1753465813512545
8. Fischer-Valuck BW, Boggs H, Katz S, Durci M, Acharya S, Rosen LR. Comparison of stereotactic body radiation therapy for biopsy-proven versus radiographically diagnosed early-stage non-small lung cancer: a single-institution experience. Tumori. 2015;101(3):287-293. doi:10.5301/tj.5000279
9. Guckenberger M, Kestin LL, Hope AJ, et al. Is there a lower limit of pretreatment pulmonary function for safe and effective stereotactic body radiotherapy for early-stage non-small cell lung cancer? J Thorac Oncol. 2012;7:542-551. doi:10.1097/JTO.0b013e31824165d7
10. Wang J, Cao J, Yuan S, et al. Poor baseline pulmonary function may not increase the risk of radiation-induced lung toxicity. Int J Radiat Oncol Biol Phys. 2013;85(3):798-804. doi:10.1016/j.ijrobp.2012.06.040
11. Henderson M, McGarry R, Yiannoutsos C, et al. Baseline pulmonary function as a predictor for survival and decline in pulmonary function over time in patients undergoing stereotactic body radiotherapy for the treatment of stage I non-small-cell lung cancer. Int J Radiat Oncol Biol Phys. 2008;72(2):404-409. doi:10.1016/j.ijrobp.2007.12.051
12. Stanic S, Paulus R, Timmerman RD, et al. No clinically significant changes in pulmonary function following stereotactic body radiation therapy for early- stage peripheral non-small cell lung cancer: an analysis of RTOG 0236. Int J Radiat Oncol Biol Phys. 2014;88(5):1092-1099. doi:10.1016/j.ijrobp.2013.12.050
Impact of an Oral Antineoplastic Renewal Clinic on Medication Possession Ratio and Cost-Savings
Evaluations of oral antineoplastic agent (OAN) adherence patterns have identified correlations between nonadherence or overadherence and poorer disease-related outcomes. Multiple studies have focused on imatinib use in chronic myeloid leukemia (CML) because of its continuous, long-term use. A study by Ganesan and colleagues found that nonadherence to imatinib was associated with a significant decrease in 5-year event-free survival (76.7% in adherent vs 59.8% in nonadherent participants).1 This study also found that 44% of patients who were adherent to imatinib achieved a complete cytogenetic response vs only 26% of patients who were nonadherent. In another study of imatinib for CML, major molecular response (MMR) was strongly correlated with adherence, and no patient with adherence < 80% achieved MMR.2 Similarly, in studies of tamoxifen for breast cancer, < 80% adherence resulted in a 10% decrease in survival compared with that of more adherent patients.3,4
In addition to the clinical implications of nonadherence, there can be a significant cost associated with suboptimal use of these medications. The price of a single dose of OAN medication may cost as much as $440.5
The benefits of multidisciplinary care teams have been identified in many studies.6,7 While studies in oncology are limited, pharmacists provide vital contributions to the oncology multidisciplinary team when managing OANs, as these health care professionals have expert knowledge of the medications, potential adverse events (AEs), and necessary monitoring parameters.8 In one study, patients seen in a pharmacist-led oral chemotherapy management program experienced improved clinical outcomes and response to therapy when compared with preintervention patients (early molecular response, 88.9% vs 54.8%, P = .01; major molecular response, 83.3% vs 57.6%, P = .06).9 During the study, 318 AEs were reported, leading to 235 pharmacist interventions to ameliorate AEs and improve adherence.
The primary objective of this study was to measure the impact of a pharmacist-driven OAN renewal clinic on medication adherence. The secondary objective was to estimate cost-savings of this new service.
Methods
Prior to July 2014, several limitations were identified related to OAN prescribing and monitoring at the Richard L. Roudebush Veterans Affairs Medical Center in Indianapolis, Indiana (RLRVAMC). The prescription ordering process relied primarily on the patient, rather than the prescriber, to initiate refills. OAN prescriptions also lacked consistency in the number of refills or quantities dispensed. Furthermore, ordering of antineoplastic products was not limited to hematology/oncology providers. Patients were identified with a significant supply on hand at the time of medication discontinuation, creating concerns about medication waste, tolerability, and nonadherence.
As a result, opportunities were identified to improve the prescribing process, recommended monitoring, toxicity and tolerability evaluation, medication reconciliation, and medication adherence. In July 2014, the RLRVAMC adopted a new chemotherapy order entry system capable of restricting prescriptions to hematology/oncology providers and limiting dispensed quantities and refill amounts. A comprehensive pharmacist-driven OAN renewal clinic was implemented on September 1, 2014, with the goals of improving long-term adherence and tolerability and minimizing medication waste.
Patients were eligible for enrollment in the clinic if they had a cancer diagnosis and were concomitantly prescribed an OAN outlined in Table 1. All eligible patients were automatically enrolled in the clinic when deemed stable on their OAN by a hematology/oncology pharmacy specialist. Stability was defined as grade ≤ 1 symptoms associated with the toxicities of OAN therapy, managed with or without intervention, as defined by the Common Terminology Criteria for Adverse Events (CTCAE) version 4.03. Once enrolled in the renewal clinic, patients were called by an oncology pharmacy resident (PGY2) 1 week before any OAN refill due date. Patients were asked a series of 5 adherence and tolerability questions (Table 2) to determine whether renewal criteria were met or further evaluation was needed. These questions were developed based on targeted information and published reports on monitoring adherence.10,11 Criteria for renewal included: < 10% self-reported missed doses of the OAN during the previous dispensing period; no hospitalizations or emergency department visits since the most recent hematology/oncology provider appointment; no changes to concomitant medication therapies; and no new or worsening medication-related AEs. Patients meeting all criteria were given a 30-day supply of OAN. Prescribing, dispensing, and delivery of the OAN were facilitated by the pharmacist. Patient cases that did not meet criteria for renewal were escalated to the hematology/oncology provider or oncology clinical pharmacy specialist for further evaluation.
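The renewal decision rule described above amounts to a four-part check; a minimal sketch follows (the function and parameter names are hypothetical illustrations, not taken from the clinic's actual system):

```python
def meets_renewal_criteria(missed_dose_fraction: float,
                           hospitalized_or_ed_visit: bool,
                           concomitant_med_changes: bool,
                           new_or_worsening_aes: bool) -> bool:
    """All four criteria must be met to approve a renewal; any failure
    escalates the case to the hematology/oncology provider or oncology
    clinical pharmacy specialist for further evaluation."""
    return (missed_dose_fraction < 0.10          # < 10% missed doses
            and not hospitalized_or_ed_visit     # no hospital/ED visits
            and not concomitant_med_changes      # no medication changes
            and not new_or_worsening_aes)        # no new/worsening AEs

# Example: a patient who missed 5% of doses with no other flags qualifies
# for a 30-day renewal.
renew = meets_renewal_criteria(0.05, False, False, False)  # True
```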
Study Design and Setting
This was a pre/post retrospective cohort, quality improvement study of patients enrolled in the RLRVAMC OAN pharmacist renewal clinic. The study was deemed exempt from institutional review board (IRB) by the US Department of Veterans Affairs (VA) Research and Development Department.
Study Population
Patients were included in the preimplementation group if they had received at least 2 prescriptions of an eligible OAN. Therapy for the preimplementation group was required to be a monthly duration > 21 days and between the dates of September 1, 2013 and August 31, 2014. Patients were included in the postimplementation group if they had received at least 2 prescriptions of the studied OANs between September 1, 2014 and January 31, 2015. Patients were excluded if they had filled < 2 prescriptions of OAN; were managed by a non-VA oncologist or hematologist; or received an OAN other than those listed in Table 1.
Data Collection
For all patients in both the pre- and postimplementation cohorts, a standardized data collection tool was used to collect the following via electronic health record review by a PGY2 oncology resident: age, race, gender, oral antineoplastic agent, refill dates, days’ supply, estimated unit cost per dose cancer diagnosis, distance from the RLRVAMC, copay status, presence of hospitalizations/ED visits/dosage reductions, discontinuation rates, reasons for discontinuation, and total number of current prescriptions. The presence or absence of dosage reductions were collected to identify concerns for tolerability, but only the original dose for the preimplementation group and dosage at time of clinic enrollment for the postimplementation group was included in the analysis.
Outcomes and Statistical Analyses
The primary outcome was medication adherence defined as the median medication possession ratio (MPR) before and after implementation of the clinic. Secondary outcomes included the proportion of patients who were adherent from before implementation to after implementation and estimated cost-savings of this clinic after implementation. MPR was used to estimate medication adherence by taking the cumulative day supply of medication on hand divided by the number of days on therapy.12 Number of days on therapy was determined by taking the difference on the start date of the new medication regimen and the discontinuation date of the same regimen. Patients were grouped by adherence into one of the following categories: < 0.8, 0.8 to 0.89, 0.9 to 1, and > 1.1. Patients were considered adherent if they reported taking ≥ 90% (MPR ≥ 0.9) of prescribed doses, adopted from the study by Anderson and colleagues.12 A patient with an MPR > 1, likely due to filling prior to the anticipated refill date, was considered 100% adherent (MPR = 1). If a patient switched OAN during the study, both agents were included as separate entities.
A conservative estimate of cost-savings was made by multiplying the RLRVAMC cost per unit of medication at time of initial prescription fill by the number of units taken each day multiplied by the total days’ supply on hand at time of therapy discontinuation. Patients with an MPR < 1 at time of therapy discontinuation were assumed to have zero remaining units on hand and zero cost savings was estimated. Waste, for purposes of cost-savings, was calculated for all MPR values > 1. Additional supply anticipated to be on hand from dose reductions was not included in the estimated cost of unused medication.
Descriptive statistics compared demographic characteristics between the pre- and postimplementation groups. MPR data were not normally distributed, which required the use of nonparametric Mann-Whitney U tests to compare pre- and postMPRs. Pearson χ2 compared the proportion of adherent patients between groups while descriptive statistics were used to estimate cost savings. Significance was determined based on a P value < .05. IBM SPSS Statistics software was used for all statistical analyses. As this was a complete sample of all eligible subjects, no sample size calculation was performed.
Results
In the preimplementation period, 246 patients received an OAN and 61 patients received an OAN in the postimplementation period (Figure 1). Of the 246 patients in the preimplementation period, 98 were eligible and included in the preimplementation group. Similarly, of the 61 patients in the postimplementation period, 35 patients met inclusion criteria for the postimplementation group. The study population was predominantly male with an average age of approximately 70 years in both groups (Table 3). More than 70% of the population in each group was White. No statistically significant differences between groups were identified. The most commonly prescribed OAN in the preimplementation group were abiraterone, imatinib, and enzalutamide (Table 3). In the postimplementation group, the most commonly prescribed agents were abiraterone, imatinib, pazopanib, and dasatinib. No significant differences were observed in prescribing of individual agents between the pre- and postimplementation groups or other characteristics that may affect adherence including patient copay status, number of concomitant medications, and driving distance from the RLRVAMC.
Thirty-six (36.7%) patients in the preimplementation group were considered nonadherent (MPR < 0.9) and 18 (18.4%) had an MPR < 0.8. Fifteen (15.3%) patients in the preimplementation clinic were considered overadherent (MPR > 1.1). Forty-seven (47.9%) patients in the preimplementation group were considered adherent (MPR 0.9 - 1.1) while all 35 (100%) patients in the postimplementation group were considered adherent (MPR 0.9 - 1.1). No non- or overadherent patients were identified in the postimplementation group (Figure 2). The median MPR for all patients in the preimplementation group was 0.94 compared with 1.06 (P < .001) in the postimplementation group.
Thirty-five (35.7%) patients had therapy discontinued or held in the preimplementation group compared with 2 (5.7%) patients in the postimplementation group (P < .001). Reasons for discontinuation in the preimplementation group included disease progression (n = 27), death (n = 3), lost to follow up (n = 2), and intolerability of therapy (n = 3). Both patients that discontinued therapy in the postimplementation group did so due to disease progression. Of the 35 patients who had their OAN discontinued or held in the preimplementation group, 14 patients had excess supply on hand at time of discontinuation. The estimated value of the unused medication was $37,890. Nine (25%) of the 35 patients who discontinued therapy had a dosage reduction during the course of therapy and the additional supply was not included in the cost estimate. Similarly, 1 of the 2 patients in the postimplementation group had their OAN discontinued during study. The cost of oversupply of medication at the time of therapy discontinuation was estimated at $1,555. No patients in the postimplementation group had dose reductions. After implementation of the OAN renewal clinic, the total cost savings between pre ($37,890) and postimplementation ($1,555) groups was $36,355.
Discussion
OANs are widely used therapies, with more than 25 million doses administered per year in the United States alone.12 The use of these agents will continue to grow as more targeted agents become available and patients request more convenient treatment options. The role for hematology/oncology clinical pharmacy services must adapt to this increased usage of OANs, including increasing pharmacist involvement in medication education, adherence and tolerability assessments, and proactive drug interaction monitoring.However, additional research is needed to determine optimal management strategies.
Our study aimed to compare OAN adherence among patients at a tertiary care VA hospital before and after implementation of a renewal clinic. The preimplementation population had a median MPR of 0.94 compared with 1.06 in the postimplementation group (P < .001). Although an ideal MPR is 1.0, we aimed for a slightly higher MPR to allow a supply buffer in the event of prescription delivery delays, as more than 90% of prescriptions are mailed to patients from a regional mail-order pharmacy. Importantly, the median MPRs do not adequately convey the impact from this clinic. The proportion of patients who were considered adherent to OANs increased from 47.9% in the preimplementation to 100% in the postimplementation period. These finding suggest that the clinical pharmacist role to assess and encourage adherence through monitoring tolerability of these OANs improved the overall medication taking experience of these patients.
Upon initial evaluation of adherence pre- and postimplementation, median adherence rates in both groups appeared to be above goal at 0.94 and 1.06 respectively. Patients in the postimplementation group intentionally received a 5- to 7-day supply buffer to account for potential prescription delivery delays due to holidays and inclement weather. This would indicate that the patients in the postimplementation group would have 15% oversupply due to the 5-day supply buffer. After correcting for patients with confounding reasons for excess (dose reductions, breaks in treatment, etc.), the median MPR in the prerefill clinic group decreased to 0.9 and the MPR in the postrefill clinic group increased slightly to 1.08. Although the median adherence rate in both the pre- and postimplementation groups were above goal of 0.90, 36% of the patients in the preimplementation group were considered nonadherent (MPR < 0.9) compared with no patients in the postimplementation group. Therefore, our intervention to improve patient adherence appeared to be beneficial at our institution.
In addition to improving adherence, one of the goals of the renewal clinic was to minimize excess supply at the time of therapy discontinuation. This was accomplished by aligning medication fills with medical visits and objective monitoring, as well as limiting supply to no more than 30 days. Of the patients in the postimplementation group, only 1 patient had remaining medication at the time of therapy discontinuation compared with 14 patients in the preimplementation group. The estimated cost savings from excess supply was $36,335. Limiting the amount of unused supply not only saves money for the patient and the institution, but also decreases opportunity for improper hazardous waste disposal and unnecessary exposure of hazardous materials to others.
Our results show the pharmacist intervention in the coordination of renewals improved adherence, minimized medication waste, and saved money. The cost of pharmacist time participating in the refill clinic was not calculated. Each visit was completed in approximately 5 minutes, with subsequent documentation and coordination taking an additional 5 to 10 minutes. During the launch of this service, the oncology pharmacy resident provided all coverage of the clinic. Oversite of the resident was provided by hematology/oncology clinical pharmacy specialists. We have continued to utilize pharmacy resident coverage since that time to meet education needs and keep the estimated cost per visit low. Another option in the case that pharmacy residents are not available would be utilization of a pharmacy technician, intern, or professional student to conduct the adherence and tolerability phone assessments. Our escalation protocol allows intervention by clinical pharmacy specialist and/or other health care providers when necessary. Trainees have only required basic training on how to use the protocol.
Limitations
Due to this study’s retrospective design, an inherent limitation is dependence on prescriber and refill records for documentation of initiation and discontinuation dates. Therefore, only the association of impact of pharmacist intervention on medication adherence can be determined as opposed to causation. We did not take into account discrepancies in day supply secondary to ‘held’ therapies, dose reductions, or doses supplied during an inpatient admission, which may alter estimates of MPR and cost-savings data. Patients in the postimplementation group intentionally received a 5 to 7-day supply buffer to account for potential prescription delivery delays due to holidays and inclement weather. This would indicate that the patients in the postimplementation group would have 15% oversupply due to the 5-day supply buffer, thereby skewing MPR values. This study did not account for cost avoidance resulting from early identification and management of toxicity. Finally, the postimplementation data only spans 4 months and a longer duration of time is needed to more accurately determine sustainability of renewal clinic interventions and provide comprehensive evaluation of cost-avoidance.
Conclusion
Implementation of an OAN renewal clinic was associated with an increase in MPR, improved proportion of patients considered adherent, and an estimated $36,335 cost-savings. However, prospective evaluation and a longer study duration are needed to determine causality of improved adherence and cost-savings associated with a pharmacist-driven OAN renewal clinic.
1. Ganesan P, Sagar TG, Dubashi B, et al. Nonadherence to imatinib adversely affects event free survival in chronic phase chronic myeloid leukemia. Am J Hematol. 2011;86:471-474. doi:10.1002/ajh.22019
2. Marin D, Bazeos A, Mahon FX, et al. Adherence is the critical factor for achieving molecular responses in patients with chronic myeloid leukemia who achieve complete cytogenetic responses on imatinib. J Clin Oncol. 2010;28:2381-2388. doi:10.1200/JCO.2009.26.3087
3. McCowan C, Shearer J, Donnan PT, et al. Cohort study examining tamoxifen adherence and its relationship to mortality in women with breast cancer. Br J Cancer. 2008;99:1763-1768. doi:10.1038/sj.bjc.6604758
4. Lexicomp Online. Sunitinib. Hudson, Ohio: Lexi-Comp, Inc; August 20, 2019.
5. Babiker A, El Husseini M, Al Nemri A, et al. Health care professional development: working as a team to improve patient care. Sudan J Paediatr. 2014;14(2):9-16.
6. Spence MM, Makarem AF, Reyes SL, et al. Evaluation of an outpatient pharmacy clinical services program on adherence and clinical outcomes among patients with diabetes and/or coronary artery disease. J Manag Care Spec Pharm. 2014;20(10):1036-1045. doi:10.18553/jmcp.2014.20.10.1036
7. Holle LM, Puri S, Clement JM. Physician-pharmacist collaboration for oral chemotherapy monitoring: insights from an academic genitourinary oncology practice. J Oncol Pharm Pract. 2015. doi:10.1177/1078155215581524
8. Muluneh B, Schneider M, Faso A, et al. Improved adherence rates and clinical outcomes of an integrated, closed-loop, pharmacist-led oral chemotherapy management program. J Oncol Pract. 2018;14(6):371-333. doi:10.1200/JOP.17.00039
9. Font R, Espinas JA, Gil-Gil M, et al. Prescription refill, patient self-report and physician report in assessing adherence to oral endocrine therapy in early breast cancer patients: a retrospective cohort study in Catalonia, Spain. Br J Cancer. 2012;107(8):1249-1256. doi:10.1038/bjc.2012.389
10. Anderson KR, Chambers CR, Lam N, et al. Medication adherence among adults prescribed imatinib, dasatinib, or nilotinib for the treatment of chronic myeloid leukemia. J Oncol Pharm Pract. 2015;21(1):19-25. doi:10.1177/1078155213520261
11. Weingart SN, Brown E, Bach PB, et al. NCCN Task Force Report: oral chemotherapy. J Natl Compr Canc Netw. 2008;6(3):S1-S14.
Evaluations of oral antineoplastic agent (OAN) adherence patterns have identified correlations between nonadherence or overadherence and poorer disease-related outcomes. Multiple studies have focused on imatinib use in chronic myeloid leukemia (CML) due to its continuous, long-term use. A study by Ganesan and colleagues found that 5-year event-free survival was significantly lower among participants nonadherent to imatinib than among adherent participants (59.8% vs 76.7%).1 This study also found that 44% of patients who were adherent to imatinib achieved complete cytogenetic response vs only 26% of patients who were nonadherent. In another study of imatinib for CML, major molecular response (MMR) was strongly correlated with adherence, and no patients with adherence < 80% achieved MMR.2 Similarly, in studies of tamoxifen for breast cancer, < 80% adherence resulted in a 10% decrease in survival compared with more adherent patients.3,4
In addition to the clinical implications of nonadherence, there can be a significant cost associated with suboptimal use of these medications. The price of a single dose of OAN medication may cost as much as $440.5
The benefits of multidisciplinary care teams have been identified in many studies.6,7 While studies are limited in oncology, pharmacists provide vital contributions to the oncology multidisciplinary team when managing OANs as these health care professionals have expert knowledge of the medications, potential adverse events (AEs), and necessary monitoring parameters.8 In one study, patients seen by the pharmacist-led oral chemotherapy management program experienced improved clinical outcomes and response to therapy when compared with preintervention patients (early molecular response, 88.9% vs 54.8%, P = .01; major molecular response, 83.3% vs 57.6%, P = .06).9 During the study, 318 AEs were reported, leading to 235 pharmacist interventions to ameliorate AEs and improve adherence.
The primary objective of this study was to measure the impact of a pharmacist-driven OAN renewal clinic on medication adherence. The secondary objective was to estimate cost-savings of this new service.
Methods
Prior to July 2014, several limitations were identified related to OAN prescribing and monitoring at the Richard L. Roudebush Veterans Affairs Medical Center in Indianapolis, Indiana (RLRVAMC). The prescription ordering process relied primarily on the patient, rather than the prescriber, to initiate refills. OAN prescriptions also lacked consistency in the number of refills and quantities dispensed. Furthermore, ordering of antineoplastic products was not limited to hematology/oncology providers. Patients were identified with significant supply on hand at the time of medication discontinuation, creating concerns for medication waste, tolerability, and nonadherence.
As a result, opportunities were identified to improve the prescribing process, recommended monitoring, toxicity and tolerability evaluation, medication reconciliation, and medication adherence. In July 2014, the RLRVAMC adopted a new chemotherapy order entry system capable of restricting prescriptions to hematology/oncology providers and limiting dispensed quantities and refill amounts. A comprehensive pharmacist-driven OAN renewal clinic was implemented on September 1, 2014, with the goal of improving long-term adherence and tolerability, in addition to minimizing medication waste.
Patients were eligible for enrollment in the clinic if they had a cancer diagnosis and were concomitantly prescribed an OAN outlined in Table 1. All eligible patients were automatically enrolled in the clinic when they were deemed stable on their OAN by a hematology/oncology pharmacy specialist. Stability was defined as ≤ Grade 1 symptoms associated with the toxicities of OAN therapy managed with or without intervention as defined by the Common Terminology Criteria for Adverse Events (CTCAE) version 4.03. Once enrolled in the renewal clinic, patients were called by an oncology pharmacy resident (PGY2) 1 week prior to any OAN refill due date. Patients were asked a series of 5 adherence and tolerability questions (Table 2) to evaluate renewal criteria for approval or need for further evaluation. These questions were developed based on targeted information and published reports on monitoring adherence.10,11 Criteria for renewal included: < 10% self-reported missed doses of the OAN during the previous dispensing period, no hospitalizations or emergency department visits since most recent hematology/oncology provider appointment, no changes to concomitant medication therapies, and no new or worsening medication-related AEs. Patients meeting all criteria were given a 30-day supply of OAN. Prescribing, dispensing, and delivery of OAN were facilitated by the pharmacist. Patient cases that did not meet criteria for renewal were escalated to the hematology/oncology provider or oncology clinical pharmacy specialist for further evaluation.
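The four renewal criteria above amount to a simple decision rule. A minimal sketch, assuming hypothetical field names and the thresholds stated in the text (this is illustrative only, not the clinic's actual software):

```python
def meets_renewal_criteria(missed_dose_fraction: float,
                           hospitalized_or_ed_visit: bool,
                           concomitant_med_changes: bool,
                           new_or_worse_adverse_events: bool) -> bool:
    """Return True if all four renewal criteria described in the text are met."""
    return (missed_dose_fraction < 0.10          # < 10% self-reported missed doses
            and not hospitalized_or_ed_visit     # no hospitalizations or ED visits
            and not concomitant_med_changes      # no changes to other medications
            and not new_or_worse_adverse_events) # no new or worsening AEs

# A patient reporting 1 missed dose out of 30, with no other flags,
# would be approved for a 30-day refill:
print(meets_renewal_criteria(1 / 30, False, False, False))  # True
```

A patient failing any single criterion would return False and be escalated to the hematology/oncology provider or oncology clinical pharmacy specialist.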
Study Design and Setting
This was a pre/post retrospective cohort, quality improvement study of patients enrolled in the RLRVAMC OAN pharmacist renewal clinic. The study was deemed exempt from institutional review board (IRB) review by the US Department of Veterans Affairs (VA) Research and Development Department.
Study Population
Patients were included in the preimplementation group if they had received at least 2 prescriptions of an eligible OAN. Therapy for the preimplementation group was required to have a dispensed duration of > 21 days per fill and to occur between September 1, 2013 and August 31, 2014. Patients were included in the postimplementation group if they had received at least 2 prescriptions of the studied OANs between September 1, 2014 and January 31, 2015. Patients were excluded if they had filled < 2 prescriptions of OAN; were managed by a non-VA oncologist or hematologist; or received an OAN other than those listed in Table 1.
Data Collection
For all patients in both the pre- and postimplementation cohorts, a standardized data collection tool was used to collect the following via electronic health record review by a PGY2 oncology resident: age, race, gender, oral antineoplastic agent, refill dates, days’ supply, estimated unit cost per dose, cancer diagnosis, distance from the RLRVAMC, copay status, presence of hospitalizations/ED visits/dosage reductions, discontinuation rates, reasons for discontinuation, and total number of current prescriptions. The presence or absence of dosage reductions was collected to identify concerns for tolerability, but only the original dose for the preimplementation group and the dosage at time of clinic enrollment for the postimplementation group was included in the analysis.
Outcomes and Statistical Analyses
The primary outcome was medication adherence, defined as the median medication possession ratio (MPR) before and after implementation of the clinic. Secondary outcomes included the proportion of patients who were adherent before implementation vs after implementation and the estimated cost-savings of this clinic after implementation. MPR was used to estimate medication adherence by dividing the cumulative days’ supply of medication on hand by the number of days on therapy.12 Number of days on therapy was determined by taking the difference between the start date of the new medication regimen and the discontinuation date of the same regimen. Patients were grouped by adherence into one of the following categories: < 0.8, 0.8 to 0.89, 0.9 to 1.1, and > 1.1. Patients were considered adherent if they reported taking ≥ 90% (MPR ≥ 0.9) of prescribed doses, adapted from the study by Anderson and colleagues.12 A patient with an MPR > 1, likely due to filling prior to the anticipated refill date, was considered 100% adherent (MPR = 1). If a patient switched OAN during the study, both agents were included as separate entities.
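The MPR calculation and the adherence categories used in the results can be sketched as follows (a minimal illustration; the example dates and days' supply are made up, not study data):

```python
from datetime import date

def mpr(cumulative_days_supply: int, start: date, stop: date) -> float:
    """Medication possession ratio: cumulative days' supply dispensed
    divided by the number of days on therapy (stop - start)."""
    days_on_therapy = (stop - start).days
    return cumulative_days_supply / days_on_therapy

def adherence_category(value: float) -> str:
    """Bucket an MPR into the categories reported in the study."""
    if value < 0.8:
        return "< 0.8"
    if value < 0.9:
        return "0.8-0.89"
    if value <= 1.1:
        return "0.9-1.1 (adherent)"
    return "> 1.1 (overadherent)"

# Example: 90 days' supply dispensed over a 96-day course of therapy
value = mpr(90, date(2014, 9, 1), date(2014, 12, 6))
print(round(value, 2), adherence_category(value))  # 0.94 0.9-1.1 (adherent)
```

For the proportion-adherent analysis, an MPR > 1 would be capped at 1 before classification, as described in the text.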
A conservative estimate of cost-savings was made by multiplying the RLRVAMC cost per unit of medication at time of initial prescription fill by the number of units taken each day multiplied by the total days’ supply on hand at time of therapy discontinuation. Patients with an MPR < 1 at time of therapy discontinuation were assumed to have zero remaining units on hand and zero cost savings was estimated. Waste, for purposes of cost-savings, was calculated for all MPR values > 1. Additional supply anticipated to be on hand from dose reductions was not included in the estimated cost of unused medication.
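The conservative waste estimate described above is a product of three quantities, zeroed out for patients assumed to have no supply remaining. A sketch under those stated assumptions (function name and example figures are hypothetical):

```python
def estimated_waste_cost(unit_cost: float, units_per_day: float,
                         days_supply_on_hand: float, mpr: float) -> float:
    """Conservative cost of unused medication at therapy discontinuation.

    Patients with an MPR <= 1 are assumed to have zero units remaining,
    so waste is estimated only for MPR values > 1, per the text.
    """
    if mpr <= 1:
        return 0.0
    return unit_cost * units_per_day * days_supply_on_hand

# e.g., a $100/unit agent taken twice daily with 10 days' supply remaining:
print(estimated_waste_cost(100.0, 2, 10, 1.08))  # 2000.0
```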
Descriptive statistics compared demographic characteristics between the pre- and postimplementation groups. MPR data were not normally distributed, which required the use of the nonparametric Mann-Whitney U test to compare pre- and postimplementation MPRs. A Pearson χ2 test compared the proportion of adherent patients between groups, while descriptive statistics were used to estimate cost savings. Significance was determined based on a P value < .05. IBM SPSS Statistics software was used for all statistical analyses. As this was a complete sample of all eligible subjects, no sample size calculation was performed.
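The same two comparisons can be reproduced in SciPy rather than SPSS. The MPR vectors below are made up for illustration; the 2×2 table uses the adherent/nonadherent counts reported in the results (47 of 98 preimplementation, 35 of 35 postimplementation):

```python
from scipy.stats import mannwhitneyu, chi2_contingency

# Hypothetical MPR samples, not the study data
pre_mpr = [0.62, 0.85, 0.94, 0.97, 1.02, 1.15]
post_mpr = [0.98, 1.02, 1.06, 1.08, 1.10]

# Nonparametric comparison of the two MPR distributions
stat, p = mannwhitneyu(pre_mpr, post_mpr, alternative="two-sided")

# Pearson chi-square on adherent vs nonadherent counts
chi2, p2, dof, expected = chi2_contingency([[47, 51],   # pre: adherent, nonadherent
                                            [35, 0]])   # post: adherent, nonadherent
print(p, p2)
```

With the reported counts, the χ2 comparison is significant at P < .05, consistent with the study's finding.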
Results
In the preimplementation period, 246 patients received an OAN, and 61 patients received an OAN in the postimplementation period (Figure 1). Of the 246 patients in the preimplementation period, 98 were eligible and included in the preimplementation group. Similarly, of the 61 patients in the postimplementation period, 35 patients met inclusion criteria for the postimplementation group. The study population was predominantly male with an average age of approximately 70 years in both groups (Table 3). More than 70% of the population in each group was White. No statistically significant differences between groups were identified. The most commonly prescribed OANs in the preimplementation group were abiraterone, imatinib, and enzalutamide (Table 3). In the postimplementation group, the most commonly prescribed agents were abiraterone, imatinib, pazopanib, and dasatinib. No significant differences were observed in prescribing of individual agents between the pre- and postimplementation groups or in other characteristics that may affect adherence, including patient copay status, number of concomitant medications, and driving distance from the RLRVAMC.
Thirty-six (36.7%) patients in the preimplementation group were considered nonadherent (MPR < 0.9) and 18 (18.4%) had an MPR < 0.8. Fifteen (15.3%) patients in the preimplementation clinic were considered overadherent (MPR > 1.1). Forty-seven (47.9%) patients in the preimplementation group were considered adherent (MPR 0.9 - 1.1) while all 35 (100%) patients in the postimplementation group were considered adherent (MPR 0.9 - 1.1). No non- or overadherent patients were identified in the postimplementation group (Figure 2). The median MPR for all patients in the preimplementation group was 0.94 compared with 1.06 (P < .001) in the postimplementation group.
Thirty-five (35.7%) patients had therapy discontinued or held in the preimplementation group compared with 2 (5.7%) patients in the postimplementation group (P < .001). Reasons for discontinuation in the preimplementation group included disease progression (n = 27), death (n = 3), loss to follow-up (n = 2), and intolerability of therapy (n = 3). Both patients who discontinued therapy in the postimplementation group did so due to disease progression. Of the 35 patients who had their OAN discontinued or held in the preimplementation group, 14 patients had excess supply on hand at time of discontinuation. The estimated value of the unused medication was $37,890. Nine (25%) of the 35 patients who discontinued therapy had a dosage reduction during the course of therapy, and the additional supply was not included in the cost estimate. Similarly, 1 of the 2 patients in the postimplementation group had their OAN discontinued during the study. The cost of oversupply of medication at the time of therapy discontinuation was estimated at $1,555. No patients in the postimplementation group had dose reductions. After implementation of the OAN renewal clinic, the total cost savings between the pre- ($37,890) and postimplementation ($1,555) groups was $36,335.
Discussion
OANs are widely used therapies, with more than 25 million doses administered per year in the United States alone.12 The use of these agents will continue to grow as more targeted agents become available and patients request more convenient treatment options. The role of hematology/oncology clinical pharmacy services must adapt to this increased usage of OANs, including increasing pharmacist involvement in medication education, adherence and tolerability assessments, and proactive drug interaction monitoring. However, additional research is needed to determine optimal management strategies.
Our study aimed to compare OAN adherence among patients at a tertiary care VA hospital before and after implementation of a renewal clinic. The preimplementation population had a median MPR of 0.94 compared with 1.06 in the postimplementation group (P < .001). Although an ideal MPR is 1.0, we aimed for a slightly higher MPR to allow a supply buffer in the event of prescription delivery delays, as more than 90% of prescriptions are mailed to patients from a regional mail-order pharmacy. Importantly, the median MPRs do not adequately convey the impact of this clinic. The proportion of patients who were considered adherent to OANs increased from 47.9% in the preimplementation period to 100% in the postimplementation period. These findings suggest that the clinical pharmacist's role in assessing and encouraging adherence, through monitoring tolerability of these OANs, improved the overall medication-taking experience of these patients.
Upon initial evaluation of adherence pre- and postimplementation, median adherence rates in both groups appeared to be above goal at 0.94 and 1.06, respectively. Patients in the postimplementation group intentionally received a 5- to 7-day supply buffer to account for potential prescription delivery delays due to holidays and inclement weather; a 5-day buffer alone would give these patients roughly a 15% oversupply. After correcting for patients with confounding reasons for excess (dose reductions, breaks in treatment, etc), the median MPR in the prerefill clinic group decreased to 0.9 and the MPR in the postrefill clinic group increased slightly to 1.08. Although the median adherence rates in both the pre- and postimplementation groups were above the goal of 0.90, 36% of the patients in the preimplementation group were considered nonadherent (MPR < 0.9) compared with no patients in the postimplementation group. Therefore, our intervention to improve patient adherence appeared to be beneficial at our institution.
In addition to improving adherence, one of the goals of the renewal clinic was to minimize excess supply at the time of therapy discontinuation. This was accomplished by aligning medication fills with medical visits and objective monitoring, as well as limiting supply to no more than 30 days. Of the patients in the postimplementation group, only 1 patient had remaining medication at the time of therapy discontinuation compared with 14 patients in the preimplementation group. The estimated cost savings from excess supply was $36,335. Limiting the amount of unused supply not only saves money for the patient and the institution, but also decreases opportunity for improper hazardous waste disposal and unnecessary exposure of hazardous materials to others.
Our results show that pharmacist intervention in the coordination of renewals improved adherence, minimized medication waste, and saved money. The cost of pharmacist time participating in the refill clinic was not calculated. Each visit was completed in approximately 5 minutes, with subsequent documentation and coordination taking an additional 5 to 10 minutes. During the launch of this service, the oncology pharmacy resident provided all coverage of the clinic. Oversight of the resident was provided by hematology/oncology clinical pharmacy specialists. We have continued to use pharmacy resident coverage since that time to meet education needs and keep the estimated cost per visit low. If pharmacy residents are not available, another option would be for a pharmacy technician, intern, or professional student to conduct the adherence and tolerability phone assessments. Our escalation protocol allows intervention by a clinical pharmacy specialist and/or other health care providers when necessary. Trainees have required only basic training on how to use the protocol.
Limitations
Due to this study’s retrospective design, an inherent limitation is dependence on prescriber and refill records for documentation of initiation and discontinuation dates. Therefore, only the association of impact of pharmacist intervention on medication adherence can be determined as opposed to causation. We did not take into account discrepancies in day supply secondary to ‘held’ therapies, dose reductions, or doses supplied during an inpatient admission, which may alter estimates of MPR and cost-savings data. Patients in the postimplementation group intentionally received a 5 to 7-day supply buffer to account for potential prescription delivery delays due to holidays and inclement weather. This would indicate that the patients in the postimplementation group would have 15% oversupply due to the 5-day supply buffer, thereby skewing MPR values. This study did not account for cost avoidance resulting from early identification and management of toxicity. Finally, the postimplementation data only spans 4 months and a longer duration of time is needed to more accurately determine sustainability of renewal clinic interventions and provide comprehensive evaluation of cost-avoidance.
Conclusion
Implementation of an OAN renewal clinic was associated with an increase in MPR, improved proportion of patients considered adherent, and an estimated $36,335 cost-savings. However, prospective evaluation and a longer study duration are needed to determine causality of improved adherence and cost-savings associated with a pharmacist-driven OAN renewal clinic.
Evaluation of oral antineoplastic agent (OAN) adherence patterns have identified correlations between nonadherence or over-adherence and poorer disease-related outcomes. Multiple studies have focused on imatinib use in chronic myeloid leukemia (CML) due to its continuous, long-term use. A study by Ganesan and colleagues found that nonadherence to imatinib showed a significant decrease in 5-year event-free survival between 76.7% of adherent participants compared with 59.8% of nonadherent participants.1 This study found that 44% of patients who were adherent to imatinib achieved complete cytogenetic response vs only 26% of patients who were nonadherent. In another study of imatinib for CML, major molecular response (MMR) was strongly correlated with adherence and no patients with adherence < 80% were able to achieve MMR.2 Similarly, in studies of tamoxifen for breast cancer, < 80% adherence resulted in a 10% decrease in survival when compared to those who were more adherent.3,4
In addition to the clinical implications of nonadherence, there can be a significant cost associated with suboptimal use of these medications. The price of a single dose of OAN medication may cost as much as $440.5
The benefits of multidisciplinary care teams have been identified in many studies.6,7 While studies are limited in oncology, pharmacists provide vital contributions to the oncology multidisciplinary team when managing OANs as these health care professionals have expert knowledge of the medications, potential adverse events (AEs), and necessary monitoring parameters.8 In one study, patients seen by the pharmacist-led oral chemotherapy management program experienced improved clinical outcomes and response to therapy when compared with preintervention patients (early molecular response, 88.9% vs 54.8%, P = .01; major molecular response, 83.3% vs 57.6%, P = .06).9 During the study, 318 AEs were reported, leading to 235 pharmacist interventions to ameliorate AEs and improve adherence.
The primary objective of this study was to measure the impact of a pharmacist-driven OAN renewal clinic on medication adherence. The secondary objective was to estimate cost-savings of this new service.
Methods
Prior to July 2014, several limitations were identified related to OAN prescribing and monitoring at the Richard L. Roudebush Veterans Affairs Medical Center in Indianapolis, Indiana (RLRVAMC). The prescription ordering process relied primarily on the patient, rather than the prescriber, to initiate refills. OAN prescriptions also lacked consistency in the number of refills and quantities dispensed. Furthermore, ordering of antineoplastic products was not limited to hematology/oncology providers. Patients were identified with significant supply on hand at the time of medication discontinuation, creating concerns for medication waste, tolerability, and nonadherence.
As a result, opportunities were identified to improve the prescribing process, recommended monitoring, toxicity and tolerability evaluation, medication reconciliation, and medication adherence. In July 2014, the RLRVAMC adopted a new chemotherapy order entry system capable of restricting prescriptions to hematology/oncology providers and limiting dispensed quantities and refill amounts. A comprehensive pharmacist-driven OAN renewal clinic was implemented on September 1, 2014, with the goal of improving long-term adherence and tolerability, in addition to minimizing medication waste.
Patients were eligible for enrollment in the clinic if they had a cancer diagnosis and were concomitantly prescribed an OAN outlined in Table 1. All eligible patients were automatically enrolled in the clinic when they were deemed stable on their OAN by a hematology/oncology pharmacy specialist. Stability was defined as ≤ Grade 1 symptoms associated with the toxicities of OAN therapy managed with or without intervention as defined by the Common Terminology Criteria for Adverse Events (CTCAE) version 4.03. Once enrolled in the renewal clinic, patients were called by an oncology pharmacy resident (PGY2) 1 week prior to any OAN refill due date. Patients were asked a series of 5 adherence and tolerability questions (Table 2) to evaluate renewal criteria for approval or need for further evaluation. These questions were developed based on targeted information and published reports on monitoring adherence.10,11 Criteria for renewal included: < 10% self-reported missed doses of the OAN during the previous dispensing period, no hospitalizations or emergency department visits since most recent hematology/oncology provider appointment, no changes to concomitant medication therapies, and no new or worsening medication-related AEs. Patients meeting all criteria were given a 30-day supply of OAN. Prescribing, dispensing, and delivery of OAN were facilitated by the pharmacist. Patient cases that did not meet criteria for renewal were escalated to the hematology/oncology provider or oncology clinical pharmacy specialist for further evaluation.
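The four renewal criteria above amount to a simple screening decision. A minimal sketch of that logic follows; the function and field names are hypothetical illustrations, not the clinic's actual tooling.

```python
def meets_renewal_criteria(missed_dose_fraction, hospitalized_or_ed_visit,
                           med_changes, new_or_worse_aes):
    """Return True if a 30-day OAN renewal can be approved by the pharmacist,
    False if the case must be escalated for further evaluation."""
    return (missed_dose_fraction < 0.10       # < 10% self-reported missed doses
            and not hospitalized_or_ed_visit  # no hospitalizations or ED visits
            and not med_changes               # no concomitant medication changes
            and not new_or_worse_aes)         # no new or worsening AEs
```

Any single failed criterion routes the patient to the hematology/oncology provider or oncology clinical pharmacy specialist rather than to an automatic refill.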
Study Design and Setting
This was a pre/post retrospective cohort, quality improvement study of patients enrolled in the RLRVAMC OAN pharmacist renewal clinic. The study was deemed exempt from institutional review board (IRB) by the US Department of Veterans Affairs (VA) Research and Development Department.
Study Population
Patients were included in the preimplementation group if they had received at least 2 prescriptions of an eligible OAN, with a monthly dispensed duration of more than 21 days, between September 1, 2013, and August 31, 2014. Patients were included in the postimplementation group if they had received at least 2 prescriptions of the studied OANs between September 1, 2014, and January 31, 2015. Patients were excluded if they had filled < 2 prescriptions of an OAN, were managed by a non-VA oncologist or hematologist, or received an OAN other than those listed in Table 1.
Data Collection
For all patients in both the pre- and postimplementation cohorts, a standardized data collection tool was used to collect the following via electronic health record review by a PGY2 oncology pharmacy resident: age, race, gender, oral antineoplastic agent, refill dates, days' supply, estimated unit cost per dose, cancer diagnosis, distance from the RLRVAMC, copay status, presence of hospitalizations/ED visits/dosage reductions, discontinuation rates, reasons for discontinuation, and total number of current prescriptions. The presence or absence of dosage reductions was collected to identify concerns for tolerability, but only the original dose for the preimplementation group and the dose at the time of clinic enrollment for the postimplementation group were included in the analysis.
Outcomes and Statistical Analyses
The primary outcome was medication adherence, defined as the median medication possession ratio (MPR) before and after implementation of the clinic. Secondary outcomes included the proportion of patients who were adherent before vs after implementation and the estimated cost savings of the new service. MPR was used to estimate medication adherence by dividing the cumulative days' supply of medication on hand by the number of days on therapy.12 The number of days on therapy was the difference between the start date of the new medication regimen and the discontinuation date of the same regimen. Patients were grouped by adherence into one of the following categories: < 0.8, 0.8 to 0.89, 0.9 to 1.1, and > 1.1. Patients were considered adherent if they reported taking ≥ 90% (MPR ≥ 0.9) of prescribed doses, adapted from the study by Anderson and colleagues.12 A patient with an MPR > 1, likely due to filling prior to the anticipated refill date, was considered 100% adherent (MPR = 1). If a patient switched OANs during the study, both agents were included as separate entities.
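As a rough illustration of the MPR calculation and adherence categories described above (a sketch only; the study derived fills from refill records, and the function and variable names here are assumptions):

```python
from datetime import date

def mpr(fills, discontinuation_date):
    """fills: list of (fill_date, days_supply) tuples.
    MPR = cumulative days' supply dispensed / days on therapy,
    where days on therapy runs from first fill to discontinuation."""
    start = min(fill_date for fill_date, _ in fills)
    days_on_therapy = (discontinuation_date - start).days
    return sum(days for _, days in fills) / days_on_therapy

def adherence_category(value):
    """Adherence categories used in the study; MPR >= 0.9 counts as adherent."""
    if value > 1.1:
        return "overadherent (> 1.1)"
    if value >= 0.9:
        return "adherent (0.9-1.1)"
    if value >= 0.8:
        return "0.8-0.89"
    return "< 0.8"
```

For example, two 30-day fills over 60 days of therapy yield an MPR of 1.0, which falls in the adherent category.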
A conservative estimate of cost-savings was made by multiplying the RLRVAMC cost per unit of medication at time of initial prescription fill by the number of units taken each day multiplied by the total days’ supply on hand at time of therapy discontinuation. Patients with an MPR < 1 at time of therapy discontinuation were assumed to have zero remaining units on hand and zero cost savings was estimated. Waste, for purposes of cost-savings, was calculated for all MPR values > 1. Additional supply anticipated to be on hand from dose reductions was not included in the estimated cost of unused medication.
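The conservative cost estimate above can be expressed directly; this is an illustrative sketch, and the parameter names are assumptions.

```python
def estimated_waste_cost(unit_cost, units_per_day, days_supply_on_hand, mpr_value):
    """Conservative waste estimate at therapy discontinuation: waste is
    counted only when MPR > 1; patients with MPR <= 1 are assumed to have
    zero remaining units on hand and therefore zero estimated savings."""
    if mpr_value <= 1:
        return 0.0
    return unit_cost * units_per_day * days_supply_on_hand
```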
Descriptive statistics were used to compare demographic characteristics between the pre- and postimplementation groups. MPR data were not normally distributed, requiring the nonparametric Mann-Whitney U test to compare pre- and postimplementation MPRs. A Pearson χ2 test compared the proportion of adherent patients between groups, and descriptive statistics were used to estimate cost savings. Significance was set at P < .05. IBM SPSS Statistics software was used for all statistical analyses. As this was a complete sample of all eligible subjects, no sample size calculation was performed.
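For intuition, the Mann-Whitney U statistic used to compare the non-normal MPR distributions reduces to a rank-sum calculation. A small pure-Python sketch follows (the study itself used IBM SPSS; the p-value lookup against the U distribution is omitted here):

```python
def mann_whitney_u(x, y):
    """U statistic for two independent samples, using average ranks for ties."""
    pooled = sorted(list(x) + list(y))

    def avg_rank(v):
        first = pooled.index(v) + 1                 # 1-based rank of first occurrence
        last = len(pooled) - pooled[::-1].index(v)  # 1-based rank of last occurrence
        return (first + last) / 2                   # average rank across ties

    rank_sum_x = sum(avg_rank(v) for v in x)
    u_x = rank_sum_x - len(x) * (len(x) + 1) / 2
    return min(u_x, len(x) * len(y) - u_x)          # smaller of U1 and U2
```

Completely separated samples give U = 0, the most extreme possible value; the observed U is then compared against its null distribution for the significance test.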
Results
In the preimplementation period, 246 patients received an OAN and 61 patients received an OAN in the postimplementation period (Figure 1). Of the 246 patients in the preimplementation period, 98 were eligible and included in the preimplementation group. Similarly, of the 61 patients in the postimplementation period, 35 patients met inclusion criteria for the postimplementation group. The study population was predominantly male with an average age of approximately 70 years in both groups (Table 3). More than 70% of the population in each group was White. No statistically significant differences between groups were identified. The most commonly prescribed OAN in the preimplementation group were abiraterone, imatinib, and enzalutamide (Table 3). In the postimplementation group, the most commonly prescribed agents were abiraterone, imatinib, pazopanib, and dasatinib. No significant differences were observed in prescribing of individual agents between the pre- and postimplementation groups or other characteristics that may affect adherence including patient copay status, number of concomitant medications, and driving distance from the RLRVAMC.
Thirty-six (36.7%) patients in the preimplementation group were considered nonadherent (MPR < 0.9) and 18 (18.4%) had an MPR < 0.8. Fifteen (15.3%) patients in the preimplementation clinic were considered overadherent (MPR > 1.1). Forty-seven (47.9%) patients in the preimplementation group were considered adherent (MPR 0.9 - 1.1) while all 35 (100%) patients in the postimplementation group were considered adherent (MPR 0.9 - 1.1). No non- or overadherent patients were identified in the postimplementation group (Figure 2). The median MPR for all patients in the preimplementation group was 0.94 compared with 1.06 (P < .001) in the postimplementation group.
Thirty-five (35.7%) patients had therapy discontinued or held in the preimplementation group compared with 2 (5.7%) patients in the postimplementation group (P < .001). Reasons for discontinuation in the preimplementation group included disease progression (n = 27), death (n = 3), loss to follow-up (n = 2), and intolerability of therapy (n = 3). Both patients who discontinued therapy in the postimplementation group did so due to disease progression. Of the 35 patients who had their OAN discontinued or held in the preimplementation group, 14 had excess supply on hand at the time of discontinuation; the estimated value of the unused medication was $37,890. Nine (25%) of the 35 patients who discontinued therapy had a dosage reduction during the course of therapy, and the additional supply was not included in the cost estimate. In the postimplementation group, 1 of the 2 patients who discontinued therapy had excess supply on hand, with the cost of oversupply estimated at $1,555; no patients in this group had dose reductions. After implementation of the OAN renewal clinic, the estimated cost savings between the preimplementation ($37,890) and postimplementation ($1,555) groups was $36,335.
Discussion
OANs are widely used therapies, with more than 25 million doses administered per year in the United States alone.12 Use of these agents will continue to grow as more targeted agents become available and patients request more convenient treatment options. Hematology/oncology clinical pharmacy services must adapt to this increased use of OANs, including greater pharmacist involvement in medication education, adherence and tolerability assessments, and proactive drug interaction monitoring. However, additional research is needed to determine optimal management strategies.
Our study aimed to compare OAN adherence among patients at a tertiary care VA hospital before and after implementation of a renewal clinic. The preimplementation population had a median MPR of 0.94 compared with 1.06 in the postimplementation group (P < .001). Although an ideal MPR is 1.0, we aimed slightly higher to allow a supply buffer in the event of prescription delivery delays, as more than 90% of prescriptions are mailed to patients from a regional mail-order pharmacy. Importantly, the median MPRs alone do not adequately convey the impact of this clinic: the proportion of patients considered adherent to OANs increased from 47.9% before implementation to 100% after implementation. These findings suggest that the clinical pharmacist's role in assessing and encouraging adherence through monitoring of OAN tolerability improved patients' overall medication-taking experience.
Upon initial evaluation of adherence pre- and postimplementation, median adherence rates in both groups appeared to be above goal, at 0.94 and 1.06, respectively. Patients in the postimplementation group intentionally received a 5- to 7-day supply buffer to account for potential prescription delivery delays due to holidays and inclement weather; this buffer corresponds to an oversupply of approximately 15% on a 30-day fill. After correcting for patients with confounding reasons for excess (dose reductions, breaks in treatment, etc), the median MPR in the preimplementation group decreased to 0.90 and the median MPR in the postimplementation group increased slightly to 1.08. Although the median adherence rates in both the pre- and postimplementation groups were above the goal of 0.90, 36% of patients in the preimplementation group were considered nonadherent (MPR < 0.9) compared with no patients in the postimplementation group. Therefore, our intervention to improve patient adherence appeared to be beneficial at our institution.
In addition to improving adherence, one of the goals of the renewal clinic was to minimize excess supply at the time of therapy discontinuation. This was accomplished by aligning medication fills with medical visits and objective monitoring, as well as limiting supply to no more than 30 days. Of the patients in the postimplementation group, only 1 patient had remaining medication at the time of therapy discontinuation compared with 14 patients in the preimplementation group. The estimated cost savings from excess supply was $36,335. Limiting the amount of unused supply not only saves money for the patient and the institution, but also decreases opportunity for improper hazardous waste disposal and unnecessary exposure of hazardous materials to others.
Our results show that pharmacist intervention in the coordination of renewals improved adherence, minimized medication waste, and saved money. The cost of pharmacist time spent in the refill clinic was not calculated. Each visit was completed in approximately 5 minutes, with subsequent documentation and coordination taking an additional 5 to 10 minutes. During the launch of this service, the oncology pharmacy resident provided all clinic coverage, with oversight by hematology/oncology clinical pharmacy specialists. We have continued to use pharmacy resident coverage since that time to meet education needs and keep the estimated cost per visit low. If pharmacy residents are not available, a pharmacy technician, intern, or professional student could conduct the adherence and tolerability phone assessments. Our escalation protocol allows intervention by clinical pharmacy specialists and/or other health care providers when necessary; trainees have required only basic training on how to use the protocol.
Limitations
Due to this study's retrospective design, an inherent limitation is dependence on prescriber and refill records for documentation of initiation and discontinuation dates. Therefore, only an association between pharmacist intervention and medication adherence can be determined, not causation. We did not take into account discrepancies in days' supply secondary to held therapies, dose reductions, or doses supplied during an inpatient admission, which may alter estimates of MPR and cost-savings data. Patients in the postimplementation group intentionally received a 5- to 7-day supply buffer to account for potential prescription delivery delays due to holidays and inclement weather; this buffer corresponds to an oversupply of approximately 15%, thereby skewing MPR values upward. This study also did not account for cost avoidance resulting from early identification and management of toxicity. Finally, the postimplementation data span only 4 months; a longer duration is needed to more accurately determine the sustainability of renewal clinic interventions and to provide a comprehensive evaluation of cost avoidance.
Conclusion
Implementation of an OAN renewal clinic was associated with an increase in MPR, improved proportion of patients considered adherent, and an estimated $36,335 cost-savings. However, prospective evaluation and a longer study duration are needed to determine causality of improved adherence and cost-savings associated with a pharmacist-driven OAN renewal clinic.
1. Ganesan P, Sagar TG, Dubashi B, et al. Nonadherence to imatinib adversely affects event free survival in chronic phase chronic myeloid leukemia. Am J Hematol. 2011;86:471-474. doi:10.1002/ajh.22019
2. Marin D, Bazeos A, Mahon FX, et al. Adherence is the critical factor for achieving molecular responses in patients with chronic myeloid leukemia who achieve complete cytogenetic responses on imatinib. J Clin Oncol. 2010;28:2381-2388. doi:10.1200/JCO.2009.26.3087
3. McCowan C, Shearer J, Donnan PT, et al. Cohort study examining tamoxifen adherence and its relationship to mortality in women with breast cancer. Br J Cancer. 2008;99:1763-1768. doi:10.1038/sj.bjc.6604758
4. Lexicomp Online. Sunitinib. Hudson, OH: Lexi-Comp, Inc; August 20, 2019.
5. Babiker A, El Husseini M, Al Nemri A, et al. Health care professional development: working as a team to improve patient care. Sudan J Paediatr. 2014;14(2):9-16.
6. Spence MM, Makarem AF, Reyes SL, et al. Evaluation of an outpatient pharmacy clinical services program on adherence and clinical outcomes among patients with diabetes and/or coronary artery disease. J Manag Care Spec Pharm. 2014;20(10):1036-1045. doi:10.18553/jmcp.2014.20.10.1036
7. Holle LM, Puri S, Clement JM. Physician-pharmacist collaboration for oral chemotherapy monitoring: insights from an academic genitourinary oncology practice. J Oncol Pharm Pract. 2015. doi:10.1177/1078155215581524
8. Muluneh B, Schneider M, Faso A, et al. Improved adherence rates and clinical outcomes of an integrated, closed-loop, pharmacist-led oral chemotherapy management program. J Oncol Pract. 2018;14(6):371-333. doi:10.1200/JOP.17.00039
9. Font R, Espinas JA, Gil-Gil M, et al. Prescription refill, patient self-report and physician report in assessing adherence to oral endocrine therapy in early breast cancer patients: a retrospective cohort study in Catalonia, Spain. Br J Cancer. 2012;107(8):1249-1256. doi:10.1038/bjc.2012.389
10. Anderson KR, Chambers CR, Lam N, et al. Medication adherence among adults prescribed imatinib, dasatinib, or nilotinib for the treatment of chronic myeloid leukemia. J Oncol Pharm Pract. 2015;21(1):19-25. doi:10.1177/1078155213520261
11. Weingart SN, Brown E, Bach PB, et al. NCCN Task Force report: oral chemotherapy. J Natl Compr Canc Netw. 2008;6(3):S1-S14.
Reduction of Opioid Use With Enhanced Recovery Program for Total Knee Arthroplasty
Total knee arthroplasty (TKA) is one of the most common surgical procedures in the United States. The volume of TKAs is projected to substantially increase over the next 30 years.1 Adequate pain control after TKA is critically important to achieve early mobilization, shorten the length of hospital stay, and reduce postoperative complications. The evolution and inclusion of multimodal pain-management protocols have had a major impact on the clinical outcomes for TKA patients.2,3
Pain-management protocols typically use several modalities to control pain throughout the perioperative period. Multimodal opioid and nonopioid oral medications are administered during the pre- and postoperative periods and often involve a combination of acetaminophen, gabapentinoids, and cyclooxygenase-2 inhibitors.4 Peripheral nerve blocks and central neuraxial blockades are widely used and have been shown to be effective in reducing postoperative pain as well as overall opioid consumption.5,6 Finally, intraoperative periarticular injections have been shown to reduce postoperative pain and opioid consumption as well as improve patient satisfaction scores.7-9 These strategies are routinely used in TKA with the goal of minimizing overall opioid consumption and adverse events, reducing perioperative complications, and improving patient satisfaction.
Periarticular injections during surgery are an integral part of the multimodal pain-management protocols, though no consensus has been reached on proper injection formulation or technique. Liposomal bupivacaine is a local anesthetic depot formulation approved by the US Food and Drug Administration for surgical patients. The reported results have been discrepant regarding the efficacy of using liposomal bupivacaine injection in patients with TKA. Several studies have reported no added benefit of liposomal bupivacaine in contrast to a mixture of local anesthetics.10,11 Other studies have demonstrated superior pain relief.12 Many factors may contribute to the discrepant data, such as injection techniques, infiltration volume, and the assessment tools used to measure efficacy and safety.13
The US Department of Veterans Affairs (VA) Veterans Health Administration (VHA) provides care to a large patient population. Many of the patients in that system have high-risk profiles, including medical comorbidities; exposure to chronic pain and opioid use; and psychological and central nervous system injuries, including posttraumatic stress disorder and traumatic brain injury. Hadlandsmyth and colleagues reported increased risk of prolonged opioid use in VA patients after TKA surgery.14 They found that 20% of the patients were still on long-term opioids more than 90 days after TKA.
The purpose of this study was to evaluate the efficacy of a comprehensive enhanced recovery after surgery (ERAS) protocol implemented at a regional VA medical center. We hypothesized that the addition of liposomal bupivacaine to a multidisciplinary ERAS protocol would reduce the length of hospital stay and opioid consumption without any deleterious effects on postoperative outcomes.
Methods
A postoperative recovery protocol was implemented in 2013 at VA North Texas Health Care System (VANTHCS) in Dallas, and many of the patients continued to have issues with satisfactory pain control, prolonged length of stay, and extended opioid consumption postoperatively. A multimodal pain-management protocol and multidisciplinary perioperative case-management protocol were implemented in 2016 to further improve the clinical outcomes of patients undergoing TKA surgery. The senior surgeon (JM) organized a multidisciplinary team of health care providers to identify and implement potential solutions. This task force met weekly and consisted of surgeons, anesthesiologists, certified registered nurse anesthetists, orthopedic physician assistants, a nurse coordinator, a physical therapist, and an occupational therapist, as well as operating room, postanesthesia care unit (PACU), and surgical ward nurses. In addition, the staff from the home health agencies and social services attended the weekly meetings.
We conducted a retrospective review of all patients who had undergone unilateral TKA from 2013 to 2018 at VANTHCS. This was a consecutive, unselected cohort. All patients were under the care of a single surgeon using identical implant systems and identical surgical techniques. This study was approved by the institutional review board at VANTHCS. Patients were divided into 2 distinct and consecutive cohorts. The standard of care (SOC) group included all patients from 2013 to 2016. The ERAS group included all patients after the institution of the standardized protocol until the end of the study period.
Data on patient demographics, American Society of Anesthesiologists risk classification, and preoperative functional status were extracted. Anesthesia techniques included either general endotracheal anesthesia or subarachnoid block with monitored anesthesia care. The quantities of opioids given during surgery, in the PACU, during the inpatient stay, as discharge prescriptions, and as refills of narcotic prescriptions up to 3 months postsurgery were recorded. All opioids were converted into morphine equivalent doses (MED) for analysis using the statistical methods described below.15 The VHA is a closed health care delivery system; therefore, all prescriptions ordered by surgery providers were recorded in the electronic health record.
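For illustration, converting mixed opioid doses to oral morphine equivalents follows a per-drug multiplier table. The factors below are commonly cited oral morphine milligram equivalent multipliers and are assumptions for this sketch; the study applied its own conversion reference,15 and IV routes use different factors.

```python
# Commonly cited oral morphine milligram equivalent (MME) multipliers;
# illustrative only -- route of administration changes these factors.
MME_FACTOR = {
    "morphine": 1.0,
    "hydrocodone": 1.0,
    "oxycodone": 1.5,
    "hydromorphone": 4.0,
}

def total_oral_med(doses):
    """doses: iterable of (drug_name, total_mg). Returns total oral MED in mg."""
    return sum(MME_FACTOR[drug] * mg for drug, mg in doses)
```

For example, 10 mg of oxycodone plus 10 mg of hydrocodone would total 25 mg MED under these assumed factors.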
ERAS Protocol
The SOC cohort was predominantly managed with general endotracheal anesthesia; the ERAS group was predominantly managed with subarachnoid blocks (Table 1). Under the ERAS protocol, patients preoperatively received oral gabapentin 300 mg, acetaminophen 650 mg, and oxycodone 20 mg, and IV ondansetron 4 mg. Intraoperatively, minimal opioids were used. In the PACU, patients received hydromorphone (Dilaudid) 0.25 mg IV as needed every 15 minutes, up to 1 mg/h; the nursing staff was trained to titrate the medication using visual analog scale pain scores. During the inpatient stay, patients received 1 g IV acetaminophen every 6 hours for 3 doses, followed by oral acetaminophen as needed. Other medications in the multimodal pain-management protocol included gabapentin 300 mg twice daily, meloxicam 15 mg daily, and oxycodone 10 mg every 4 hours as needed. The rescue medication for insufficient pain relief was hydromorphone 0.25 mg IV every 15 minutes for visual analog scale pain scores > 8. On discharge, patients received a prescription for 30 tablets of hydrocodone 10 mg.
Periarticular Injections
Intraoperatively, all patients in the SOC and ERAS groups received periarticular injections; the liposomal bupivacaine injection was added to the standard injection mixture for the ERAS group. For the SOC group, the total volume of 100 mL was divided into 10 separate 10-mL syringes; for the ERAS group, the total volume of 140 mL was divided into 14 separate 10-mL syringes. SOC group injections were performed with an 18-gauge needle, and the periarticular soft tissues were grossly infiltrated. ERAS group injections were done with more attention to anatomical detail: injection sites included the posterior joint capsule, the medial compartment, the lateral compartment, the tibial fat pad, the quadriceps and patellar tendons, the femoral and tibial periosteum circumferentially, and the anterior joint capsule. Each needle-stick in the ERAS group delivered 1 to 1.5 mL through a 22-gauge needle to each compartment of the knee.
Outcome Variables
The primary outcome measure was total oral MED intraoperatively, in the PACU, during the hospital inpatient stay, in the hospital discharge prescription, and during the 3-month period after hospital discharge. Incidence of nausea and vomiting during the inpatient stay and any narcotic use at 6 months postsurgery were secondary binary outcomes.
Statistical Analysis
Demographic data and the clinical characteristics for the entire group were described using the sample mean and SD for continuous variables and the frequency and percentage for categorical variables. Differences between the 2 cohorts were analyzed using a 2-independent-sample t test and Fisher exact test.
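For readers unfamiliar with the Fisher exact test used for the categorical comparisons above, a minimal pure-Python sketch of the two-sided version for a 2 × 2 table follows. This is an illustration only, not the SAS implementation used in the study.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for a 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins whose probability does not exceed that of the observed table.
    """
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d

    def p_table(x):  # probability of a table with x in the top-left cell
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # the tiny tolerance guards against floating-point ties
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))
```

For example, the classic "lady tasting tea" table [[3, 1], [1, 3]] yields a two-sided P of 34/70 ≈ .49.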
The total oral MED in each phase of care was estimated using a separate Poisson model because the data were not normally distributed. A log-linear regression model was used to evaluate the main effect of ERAS vs the SOC cohort on the total oral MED used. Finally, separate multiple logistic regression models were used to estimate the odds of postoperative nausea and vomiting and of narcotic use at 6 months postsurgery between the cohorts; adjusted odds ratios (ORs) were estimated from the logistic models. Age, sex, body mass index, preoperative functional independence score, narcotic use within 3 months prior to surgery, anesthesia type used (subarachnoid block with monitored anesthesia care vs general endotracheal anesthesia), and postoperative complications (yes/no) were included as covariates in each model. The length of hospital stay and the above-mentioned factors were also included as covariates in the model estimating the total oral MED during the hospital stay, on hospital discharge, during the 3-month period after hospital discharge, and at 6 months following hospital discharge.
Statistical analysis was done using SAS version 9.4. The level of significance was set at α = 0.05 (2 tailed), and we implemented the false discovery rate (FDR) procedure to control false positives over multiple tests.16
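The FDR procedure referenced above is the Benjamini-Hochberg step-up method.16 As an illustration (not the SAS code used in the study), adjusted P values can be computed as follows:

```python
def benjamini_hochberg(p_values):
    """Benjamini-Hochberg adjusted P values (step-up FDR procedure).

    Each raw P value p_(i) (sorted ascending, rank i of m) is scaled by
    m/i, and a cumulative minimum is taken from the largest rank downward
    so that the adjusted values remain monotone.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):  # walk from the largest P to the smallest
        i = order[rank - 1]
        running_min = min(running_min, p_values[i] * m / rank)
        adjusted[i] = running_min
    return adjusted
```

Treating the seven P values reported in Tables 3 and 4 as a single family, this procedure is consistent with the published FDR values after rounding (eg, .013 × 7/5 ≈ .018).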
Results
Two hundred forty-nine patients had 296 elective unilateral TKAs in this study from 2013 through 2018. Thirty-one patients had both unilateral TKAs under the SOC protocol; 5 patients had both unilateral TKAs under the ERAS protocol. Eleven of the patients who eventually had both knees replaced had 1 operation under each protocol. The SOC group included 196 TKAs, and the ERAS group included 100 TKAs. Of the 196 SOC patients, 94% were male. The mean age was 68.2 years (range, 48-86). The length of hospital stay ranged from 36.6 to 664.3 hours. Of the 100 ERAS patients, 96% were male (Table 2). The mean age was 66.7 years (range, 48-85). The length of hospital stay ranged from 12.5 to 45 hours.
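The reported counts are internally consistent, as a quick arithmetic check (using only the figures above) confirms:

```python
# Consistency check on the reported cohort counts.
patients = 249
bilateral = 31 + 5 + 11            # both-SOC, both-ERAS, and one TKA under each protocol
procedures = patients + bilateral  # each bilateral patient contributes a second TKA
soc_tkas, eras_tkas = 196, 100

assert procedures == 296
assert soc_tkas + eras_tkas == procedures
```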
Perioperative Opioid Use
Of the SOC patients, 99.0% received narcotics intraoperatively (range, 0-198 mg MED), and 74.5% received narcotics during PACU recovery (range, 0-141 mg MED). The total oral MED during the hospital stay for the SOC patients ranged from 10 to 2,946 mg. Of the ERAS patients, 86% received no narcotics during surgery (range, 0-110 mg MED), and 98% received no narcotics during PACU recovery (range, 0-65 mg MED). The total oral MED during the hospital stay for the ERAS patients ranged from 10 to 240 mg.
The MED used was significantly lower for the ERAS patients than it was for the SOC patients during surgery (10.5 mg vs 57.4 mg, P = .0001, FDR = .0002) and in the PACU (1.3 mg vs 13.6 mg, P = .0002, FDR = .0004), during the inpatient stay (66.7 mg vs 169.5 mg, P = .0001, FDR = .0002), and on hospital discharge (419.3 mg vs 776.7 mg, P = .0001, FDR = .0002). However, there was no significant difference in the total MED prescriptions filled between patients on the ERAS protocol vs those who received SOC during the 3-month period after hospital discharge (858.3 mg vs 1126.1 mg, P = .29, FDR = .29)(Table 3).
Finally, the logistic regression analysis, adjusting for the covariates, demonstrated that the ERAS patients were less likely to take narcotics at 6 months following hospital discharge (OR, 0.23; P = .013; FDR = .018) and less likely to have postoperative nausea and vomiting (OR, 0.18; P = .019; FDR = .02) than SOC patients. There was no statistically significant difference between complication rates for the SOC and ERAS groups (11.2% and 5.0%, respectively; overall, 9.1%; P = .09)(Table 4).
Discussion
Orthopedic surgery has been associated with long-term opioid use and misuse. Orthopedic surgeons are frequently among the highest prescribers of narcotics. According to Volkow and colleagues, orthopedic surgeons were the fourth largest prescribers of opioids in 2009, behind primary care physicians, internists, and dentists.17 The opioid crisis in the United States is well recognized. In 2017, > 70,000 deaths occurred due to drug overdoses, with 68% involving a prescription or illicit opioid. The Centers for Disease Control and Prevention has estimated a total economic burden of $78.5 billion per year as a direct result of misused prescribed opioids.18 This includes the cost of health care, lost productivity, addiction treatment, and the impact on the criminal justice system.
The current opioid crisis places further emphasis on opioid-reducing or opioid-sparing techniques in patients undergoing TKA. The efficacy of liposomal bupivacaine for intraoperative periarticular injection, and whether it should be included in multimodal protocols, is debated in the literature. Researchers have argued that liposomal bupivacaine is not superior to regular bupivacaine and that its increased cost is therefore not justified.19,20 A meta-analysis by Zhao and colleagues showed no difference in pain control and functional recovery between liposomal bupivacaine and control.21 In a randomized clinical trial, Schroer and colleagues matched liposomal bupivacaine against regular bupivacaine and found no difference in pain scores and similar narcotic use during hospitalization.22
Other studies evaluating liposomal bupivacaine have demonstrated postoperative benefits in pain relief and potential reductions in opioid consumption.23 In a multicenter randomized controlled trial, Barrington and colleagues noted improved pain control at 6 and 12 hours after surgery with liposomal bupivacaine as a periarticular injection vs ropivacaine, though results were similar when compared with intrathecal morphine.24 Snyder and colleagues reported higher patient satisfaction with pain control and overall experience, as well as decreased MED consumption in the PACU and on postoperative days 0 to 2, when using liposomal bupivacaine vs a multidrug cocktail for periarticular injection.25
The PILLAR trial, an industry-sponsored study, was designed to compare the effects of local infiltration analgesia with and without liposomal bupivacaine, with emphasis on a meticulous, standardized infiltration technique.13 In our study, we used a similar technique, expanding the volume of injection solution to 140 mL delivered throughout the knee in a series of 14 syringes. Each needle-stick delivered 1 to 1.5 mL through a 22-gauge needle to each compartment of the knee. Infiltration techniques have varied across the literature on periarticular injections.
In our experience, a standard infiltration technique is critical to the effective delivery of liposomal bupivacaine throughout all compartments of the knee and to obtaining reproducible pain control. The importance of injection technique cannot be overemphasized, and variations can be seen in studies published to date.26 Well-designed trials are needed to address this key component.
There have been limited data focused on the veteran population regarding postoperative pain-management strategies and recovery pathways with or without liposomal bupivacaine. In a retrospective review, Sakamoto and colleagues found that VA patients had reduced opioid use in the first 24 hours after primary TKA with the use of intraoperative liposomal bupivacaine.27 The VA population has been shown to be at high risk for opioid misuse. The prevalence of comorbidities such as traumatic brain injury, posttraumatic stress disorder, and depression in the VA population also places these patients at risk for polypharmacy of central nervous system–acting medications.28 This emphasizes the importance of multimodal strategies that can limit or eliminate narcotics in the perioperative period. The implementation of our ERAS protocol reduced opioid use during the intraoperative, PACU, and inpatient phases of care.
While the financial implications of our recovery protocol were not a primary focus of this study, the effect on overall inpatient cost to the VHA is notable. According to the Health Economics Resource Center, the average daily cost of a VA inpatient surgical bed increased from $4,831 in 2013 to $6,220 in 2018.29 The 44.5-hour reduction in length of stay between our cohorts translates to a substantial financial savings per patient after protocol implementation. A more detailed analysis would be needed to evaluate the financial impact of other aspects of our protocol, such as the elimination of patient-controlled anesthesia and the reduction in total narcotics prescribed in the postoperative global period.
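As a rough illustration of the magnitude of that savings (a back-of-the-envelope sketch assuming the 2018 daily rate applies uniformly across the reduced hours, not a formal cost analysis):

```python
# Back-of-the-envelope estimate of inpatient savings from the shorter stay,
# using the 2018 HERC daily rate cited above. Illustrative only; a formal
# analysis would require itemized VA cost data.
daily_cost_2018 = 6220       # dollars per inpatient surgical bed day
los_reduction_hours = 44.5   # mean difference in length of stay between cohorts

savings_per_patient = los_reduction_hours / 24 * daily_cost_2018
print(f"~${savings_per_patient:,.0f} per patient")  # prints ~$11,533 per patient
```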
Limitations
The limitations of this study include its retrospective design. Because the VHA patient population is older and more predominantly male than the general population, the findings may be subject to selection bias and may not generalize to populations with more women. In a recent study by Perruccio and colleagues, sex was found to moderate the effects of comorbidities, low back pain, and depressive symptoms on postoperative pain in patients undergoing TKA.30
With regard to outpatient narcotic prescriptions, although we cannot fully know whether these filled prescriptions were used for pain control, it is a reasonable assumption that patients who are dealing with continued postoperative or chronic pain issues will fill these prescriptions or seek refills. It is important to note that the data on prescriptions and refills in the 3-month postoperative period include all narcotic prescriptions filled by any VHA prescriber and are not specifically limited to our orthopedic team. For outpatient narcotic use, we were not able to access accurate pill counts for any discharge prescriptions or subsequent refills that were given throughout the VA system. We were able to report on total prescriptions filled in the first 3 months following TKA.
We calculated total oral MEDs to better understand the amount of narcotics being distributed throughout our population of patients. We believe this provides important information about the overall narcotic burden in the veteran population. There was no significant difference between the SOC and ERAS groups regarding oral MED prescribed in the 3-month postoperative period; however, at the 6-month follow-up visit, only 16% of patients in the ERAS group were taking any type of narcotic vs 37.2% in the SOC group (P = .0002).
Conclusions
A multidisciplinary ERAS protocol implemented at VANTHCS was effective in reducing length of stay and opioid burden throughout all phases of surgical care in our patients undergoing primary TKA. Patient and nursing education seem to be critical components to the implementation of a successful multimodal pain protocol. Reducing the narcotic burden has valuable financial and medical benefits in this at-risk population.
1. Inacio MCS, Paxton EW, Graves SE, Namba RS, Nemes S. Projected increase in total knee arthroplasty in the United States - an alternative projection model. Osteoarthritis Cartilage. 2017;25(11):1797-1803. doi:10.1016/j.joca.2017.07.022
2. Chou R, Gordon DB, de Leon-Casasola OA, et al. Management of postoperative pain: a clinical practice guideline from the American Pain Society, the American Society of Regional Anesthesia and Pain Medicine, and the American Society of Anesthesiologists’ Committee on Regional Anesthesia, Executive Committee, and Administrative Council [published correction appears in J Pain. 2016 Apr;17(4):508-10. Dosage error in article text]. J Pain. 2016;17(2):131-157. doi:10.1016/j.jpain.2015.12.008
3. Moucha CS, Weiser MC, Levin EJ. Current strategies in anesthesia and analgesia for total knee arthroplasty. J Am Acad Orthop Surg. 2016;24(2):60-73. doi:10.5435/JAAOS-D-14-00259
4. Parvizi J, Miller AG, Gandhi K. Multimodal pain management after total joint arthroplasty. J Bone Joint Surg Am. 2011;93(11):1075-1084. doi:10.2106/JBJS.J.01095
5. Jenstrup MT, Jæger P, Lund J, et al. Effects of adductor-canal-blockade on pain and ambulation after total knee arthroplasty: a randomized study. Acta Anaesthesiol Scand. 2012;56(3):357-364. doi:10.1111/j.1399-6576.2011.02621.x
6. Macfarlane AJ, Prasad GA, Chan VW, Brull R. Does regional anesthesia improve outcome after total knee arthroplasty? Clin Orthop Relat Res. 2009;467(9):2379-2402. doi:10.1007/s11999-008-0666-9
7. Parvataneni HK, Shah VP, Howard H, Cole N, Ranawat AS, Ranawat CS. Controlling pain after total hip and knee arthroplasty using a multimodal protocol with local periarticular injections: a prospective randomized study. J Arthroplasty. 2007;22(6)(suppl 2):33-38. doi:10.1016/j.arth.2007.03.034
8. Busch CA, Shore BJ, Bhandari R, et al. Efficacy of periarticular multimodal drug injection in total knee arthroplasty. A randomized trial. J Bone Joint Surg Am. 2006;88(5):959-963. doi:10.2106/JBJS.E.00344
9. Lamplot JD, Wagner ER, Manning DW. Multimodal pain management in total knee arthroplasty: a prospective randomized controlled trial. J Arthroplasty. 2014;29(2):329-334. doi:10.1016/j.arth.2013.06.005
10. Hyland SJ, Deliberato DG, Fada RA, Romanelli MJ, Collins CL, Wasielewski RC. Liposomal bupivacaine versus standard periarticular injection in total knee arthroplasty with regional anesthesia: a prospective randomized controlled trial. J Arthroplasty. 2019;34(3):488-494. doi:10.1016/j.arth.2018.11.026
11. Barrington JW, Lovald ST, Ong KL, Watson HN, Emerson RH Jr. Postoperative pain after primary total knee arthroplasty: comparison of local injection analgesic cocktails and the role of demographic and surgical factors. J Arthroplasty. 2016;31(9)(suppl):288-292. doi:10.1016/j.arth.2016.05.002
12. Bramlett K, Onel E, Viscusi ER, Jones K. A randomized, double-blind, dose-ranging study comparing wound infiltration of DepoFoam bupivacaine, an extended-release liposomal bupivacaine, to bupivacaine HCl for postsurgical analgesia in total knee arthroplasty. Knee. 2012;19(5):530-536. doi:10.1016/j.knee.2011.12.004
13. Mont MA, Beaver WB, Dysart SH, Barrington JW, Del Gaizo D. Local infiltration analgesia with liposomal bupivacaine improves pain scores and reduces opioid use after total knee arthroplasty: results of a randomized controlled trial. J Arthroplasty. 2018;33(1):90-96. doi:10.1016/j.arth.2017.07.024
14. Hadlandsmyth K, Vander Weg MW, McCoy KD, Mosher HJ, Vaughan-Sarrazin MS, Lund BC. Risk for prolonged opioid use following total knee arthroplasty in veterans. J Arthroplasty. 2018;33(1):119-123. doi:10.1016/j.arth.2017.08.022
15. Nielsen S, Degenhardt L, Hoban B, Gisev N. A synthesis of oral morphine equivalents (OME) for opioid utilisation studies. Pharmacoepidemiol Drug Saf. 2016;25(6):733-737. doi:10.1002/pds.3945
16. Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Statist Soc B. 1995;57(1):289-300. doi:10.1111/j.2517-6161.1995.tb02031.x
17. Volkow ND, McLellan TA, Cotto JH, Karithanom M, Weiss SRB. Characteristics of opioid prescriptions in 2009. JAMA. 2011;305(13):1299-1301. doi:10.1001/jama.2011.401
18. Scholl L, Seth P, Kariisa M, Wilson N, Baldwin G. Drug and opioid-involved overdose deaths - United States, 2013-2017. MMWR Morb Mortal Wkly Rep. 2018;67(5152):1419-1427. doi:10.15585/mmwr.mm675152e1
19. Pichler L, Poeran J, Zubizarreta N, et al. Liposomal bupivacaine does not reduce inpatient opioid prescription or related complications after knee arthroplasty: a database analysis. Anesthesiology. 2018;129(4):689-699. doi:10.1097/ALN.0000000000002267
20. Jain RK, Porat MD, Klingenstein GG, Reid JJ, Post RE, Schoifet SD. The AAHKS Clinical Research Award: liposomal bupivacaine and periarticular injection are not superior to single-shot intra-articular injection for pain control in total knee arthroplasty. J Arthroplasty. 2016;31(9)(suppl):22-25. doi:10.1016/j.arth.2016.03.036
21. Zhao B, Ma X, Zhang J, Ma J, Cao Q. The efficacy of local liposomal bupivacaine infiltration on pain and recovery after total joint arthroplasty: a systematic review and meta-analysis of randomized controlled trials. Medicine (Baltimore). 2019;98(3):e14092. doi:10.1097/MD.0000000000014092
22. Schroer WC, Diesfeld PG, LeMarr AR, Morton DJ, Reedy ME. Does extended-release liposomal bupivacaine better control pain than bupivacaine after total knee arthroplasty (TKA)? A prospective, randomized clinical trial. J Arthroplasty. 2015;30(9)(suppl):64-67. doi:10.1016/j.arth.2015.01.059
23. Ma J, Zhang W, Yao S. Liposomal bupivacaine infiltration versus femoral nerve block for pain control in total knee arthroplasty: a systematic review and meta-analysis. Int J Surg. 2016;36(Pt A):44-55. doi:10.1016/j.ijsu.2016.10.007
24. Barrington JW, Emerson RH, Lovald ST, Lombardi AV, Berend KR. No difference in early analgesia between liposomal bupivacaine injection and intrathecal morphine after TKA. Clin Orthop Relat Res. 2017;475(1):94-105. doi:10.1007/s11999-016-4931-z
25. Snyder MA, Scheuerman CM, Gregg JL, Ruhnke CJ, Eten K. Improving total knee arthroplasty perioperative pain management using a periarticular injection with bupivacaine liposomal suspension. Arthroplast Today. 2016;2(1):37-42. doi:10.1016/j.artd.2015.05.005
26. Kuang MJ, Du Y, Ma JX, He W, Fu L, Ma XL. The efficacy of liposomal bupivacaine using periarticular injection in total knee arthroplasty: a systematic review and meta-analysis. J Arthroplasty. 2017;32(4):1395-1402. doi:10.1016/j.arth.2016.12.025
27. Sakamoto B, Keiser S, Meldrum R, Harker G, Freese A. Efficacy of liposomal bupivacaine infiltration on the management of total knee arthroplasty. JAMA Surg. 2017;152(1):90-95. doi:10.1001/jamasurg.2016.3474
28. Collett GA, Song K, Jaramillo CA, Potter JS, Finley EP, Pugh MJ. Prevalence of central nervous system polypharmacy and associations with overdose and suicide-related behaviors in Iraq and Afghanistan war veterans in VA care 2010-2011. Drugs Real World Outcomes. 2016;3(1):45-52. doi:10.1007/s40801-015-0055-0
29. US Department of Veterans Affairs. HERC inpatient average cost data. Updated April 2, 2021. Accessed April 16, 2021. https://www.herc.research.va.gov/include/page.asp?id=inpatient#herc-inpat-avg-cost
30. Perruccio AV, Fitzpatrick J, Power JD, et al. Sex-modified effects of depression, low back pain, and comorbidities on pain after total knee arthroplasty for osteoarthritis. Arthritis Care Res (Hoboken). 2020;72(8):1074-1080. doi:10.1002/acr.24002
Total knee arthroplasty (TKA) is one of the most common surgical procedures in the United States. The volume of TKAs is projected to substantially increase over the next 30 years.1 Adequate pain control after TKA is critically important to achieve early mobilization, shorten the length of hospital stay, and reduce postoperative complications. The evolution and inclusion of multimodal pain-management protocols have had a major impact on the clinical outcomes for TKA patients.2,3
Pain-management protocols typically use several modalities to control pain throughout the perioperative period. Multimodal opioid and nonopioid oral medications are administered during the pre- and postoperative periods and often involve a combination of acetaminophen, gabapentinoids, and cyclooxygenase-2 inhibitors.4 Peripheral nerve blocks and central neuraxial blockades are widely used and have been shown to be effective in reducing postoperative pain as well as overall opioid consumption.5,6 Finally, intraoperative periarticular injections have been shown to reduce postoperative pain and opioid consumption as well as improve patient satisfaction scores.7-9 These strategies are routinely used in TKA with the goal of minimizing overall opioid consumption and adverse events, reducing perioperative complications, and improving patient satisfaction.
Periarticular injections during surgery are an integral part of the multimodal pain-management protocols, though no consensus has been reached on proper injection formulation or technique. Liposomal bupivacaine is a local anesthetic depot formulation approved by the US Food and Drug Administration for surgical patients. The reported results have been discrepant regarding the efficacy of using liposomal bupivacaine injection in patients with TKA. Several studies have reported no added benefit of liposomal bupivacaine in contrast to a mixture of local anesthetics.10,11 Other studies have demonstrated superior pain relief.12 Many factors may contribute to the discrepant data, such as injection techniques, infiltration volume, and the assessment tools used to measure efficacy and safety.13
The US Department of Veterans Affairs (VA) Veterans Health Administration (VHA) provides care to a large patient population. Many of the patients in that system have high-risk profiles, including medical comorbidities; exposure to chronic pain and opioid use; and psychological and central nervous system injuries, including posttraumatic stress disorder and traumatic brain injury. Hadlandsmyth and colleagues reported increased risk of prolonged opioid use in VA patients after TKA surgery.14 They found that 20% of the patients were still on long-term opioids more than 90 days after TKA.
The purpose of this study was to evaluate the efficacy of a comprehensive enhanced recovery after surgery (ERAS) protocol implemented at a regional VA medical center. We hypothesized that the addition of liposomal bupivacaine to a multidisciplinary ERAS protocol would reduce the length of hospital stay and opioid consumption without any deleterious effects on postoperative outcomes.
Methods
A postoperative recovery protocol was implemented in 2013 at VA North Texas Health Care System (VANTHCS) in Dallas, but many patients continued to have issues with satisfactory pain control, prolonged length of stay, and extended postoperative opioid consumption. A multimodal pain-management protocol and a multidisciplinary perioperative case-management protocol were therefore implemented in 2016 to further improve the clinical outcomes of patients undergoing TKA. The senior surgeon (JM) organized a multidisciplinary team of health care providers to identify and implement potential solutions. This task force met weekly and consisted of surgeons, anesthesiologists, certified registered nurse anesthetists, orthopedic physician assistants, a nurse coordinator, a physical therapist, and an occupational therapist, as well as operating room, postanesthesia care unit (PACU), and surgical ward nurses. In addition, staff from the home health agencies and social services attended the weekly meetings.
We conducted a retrospective review of all patients who had undergone unilateral TKA from 2013 to 2018 at VANTHCS. This was a consecutive, unselected cohort. All patients were under the care of a single surgeon using identical implant systems and identical surgical techniques. This study was approved by the institutional review board at VANTHCS. Patients were divided into 2 distinct and consecutive cohorts. The standard of care (SOC) group included all patients from 2013 to 2016. The ERAS group included all patients after the institution of the standardized protocol until the end of the study period.
Data on patient demographics, the American Society of Anesthesiologists risk classification, and preoperative functional status were extracted. Anesthesia techniques included either general endotracheal anesthesia or subarachnoid block with monitored anesthesia care. The quantity of the opioids given during surgery, in the PACU, during the inpatient stay, as discharge prescriptions, and as refills of the narcotic prescriptions up to 3 months postsurgery were recorded. All opioids were converted into morphine equivalent dosages (MED) in order to be properly analyzed using the statistical methodologies described in the statistical section.15 The VHA is a closed health care delivery system; therefore, all of the prescriptions ordered by surgery providers were recorded in the electronic health record.
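As an illustration of the MED conversion, the sketch below sums mixed opioid doses as oral morphine equivalents. The conversion factors shown are commonly published oral morphine-milligram-equivalent factors and are examples only; the study used the synthesis by Nielsen and colleagues.15

```python
# Illustrative MED conversion. Factors are commonly published oral
# morphine-equivalent values and are examples, not the study's exact table.
MED_FACTORS = {
    "oral morphine": 1.0,
    "oral oxycodone": 1.5,
    "oral hydrocodone": 1.0,
}

def total_oral_med(doses):
    """Sum (dose_mg, drug) pairs as oral morphine equivalents in mg."""
    return sum(mg * MED_FACTORS[drug] for mg, drug in doses)

# e.g., a discharge prescription of 30 tablets of hydrocodone 10 mg:
discharge = [(10, "oral hydrocodone")] * 30
print(total_oral_med(discharge))  # 300.0 mg oral MED
```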
In our experience, a standard infiltration technique is critical to the effective delivery of liposomal bupivacaine throughout all compartments of the knee and to obtaining reproducible pain control. The importance of injection technique cannot be overemphasized, and variations can be seen in studies published to date.26 Well-designed trials are needed to address this key component.
There have been limited data focused on the veteran population regarding postoperative pain-management strategies and recovery pathways either with or without liposomal bupivacaine. In a retrospective review, Sakamoto and colleagues found VA patients undergoing TKA had reduced opioid use in the first 24 hours after primary TKA with the use of intraoperative liposomal bupivacaine.27 The VA population has been shown to be at high risk for opioid misuse. The prevalence of comorbidities such as traumatic brain injury, posttraumatic stress disorder, and depression in the VA population also places them at risk for polypharmacy of central nervous system–acting medications.28 This emphasizes the importance of multimodal strategies, which can limit or eliminate narcotics in the perioperative period. The implementation of our ERAS protocol reduced opioid use during intraoperative, PACU, and inpatient hospital stay.
While the financial implications of our recovery protocol were not a primary focus of this study, there are many notable benefits on the overall inpatient cost to the VHA. According to the Health Economics Resource Center, the average daily cost of stay while under VA care for an inpatient surgical bed increased from $4,831 in 2013 to $6,220 in 2018.29 Our reduction in length of stay between our cohorts is 44.5 hours, which translates to a substantial financial savings per patient after protocol implementation. A more detailed look at the financial aspect of our protocol would need to be performed to evaluate the financial impact of other aspects of our protocol, such as the elimination of patient-controlled anesthesia and the reduction in total narcotics prescribed in the postoperative global period.
Limitations
The limitations of this study include its retrospective study design. With the VHA patient population, it may be subject to selection bias, as the population is mostly older and predominantly male compared with that of the general population. This could potentially influence the efficacy of our protocol on a population of patients with more women. In a recent study by Perruccio and colleagues, sex was found to moderate the effects of comorbidities, low back pain, and depressive symptoms on postoperative pain in patients undergoing TKA.30
With regard to outpatient narcotic prescriptions, although we cannot fully know whether these filled prescriptions were used for pain control, it is a reasonable assumption that patients who are dealing with continued postoperative or chronic pain issues will fill these prescriptions or seek refills. It is important to note that the data on prescriptions and refills in the 3-month postoperative period include all narcotic prescriptions filled by any VHA prescriber and are not specifically limited to our orthopedic team. For outpatient narcotic use, we were not able to access accurate pill counts for any discharge prescriptions or subsequent refills that were given throughout the VA system. We were able to report on total prescriptions filled in the first 3 months following TKA.
We calculated total oral MEDs to better understand the amount of narcotics being distributed throughout our population of patients. We believe this provides important information about the overall narcotic burden in the veteran population. There was no significant difference between the SOC and ERAS groups regarding oral MED prescribed in the 3-month postoperative period; however, at the 6-month follow-up visit, only 16% of patients in the ERAS group were taking any type of narcotic vs 37.2% in the SOC group (P = .0002).
Conclusions
A multidisciplinary ERAS protocol implemented at VANTHCS was effective in reducing length of stay and opioid burden throughout all phases of surgical care in our patients undergoing primary TKA. Patient and nursing education seem to be critical components to the implementation of a successful multimodal pain protocol. Reducing the narcotic burden has valuable financial and medical benefits in this at-risk population.
Total knee arthroplasty (TKA) is one of the most common surgical procedures in the United States. The volume of TKAs is projected to substantially increase over the next 30 years.1 Adequate pain control after TKA is critically important to achieve early mobilization, shorten the length of hospital stay, and reduce postoperative complications. The evolution and inclusion of multimodal pain-management protocols have had a major impact on the clinical outcomes for TKA patients.2,3
Pain-management protocols typically use several modalities to control pain throughout the perioperative period. Multimodal opioid and nonopioid oral medications are administered during the pre- and postoperative periods and often involve a combination of acetaminophen, gabapentinoids, and cyclooxygenase-2 inhibitors.4 Peripheral nerve blocks and central neuraxial blockades are widely used and have been shown to be effective in reducing postoperative pain as well as overall opioid consumption.5,6 Finally, intraoperative periarticular injections have been shown to reduce postoperative pain and opioid consumption as well as improve patient satisfaction scores.7-9 These strategies are routinely used in TKA with the goal of minimizing overall opioid consumption and adverse events, reducing perioperative complications, and improving patient satisfaction.
Periarticular injections during surgery are an integral part of multimodal pain-management protocols, though no consensus has been reached on the proper injection formulation or technique. Liposomal bupivacaine is a local anesthetic depot formulation approved by the US Food and Drug Administration for surgical patients. Reported results on the efficacy of liposomal bupivacaine injection in patients undergoing TKA have been discrepant. Several studies have reported no added benefit of liposomal bupivacaine in contrast to a mixture of local anesthetics.10,11 Other studies have demonstrated superior pain relief.12 Many factors may contribute to the discrepant data, such as injection techniques, infiltration volume, and the assessment tools used to measure efficacy and safety.13
The US Department of Veterans Affairs (VA) Veterans Health Administration (VHA) provides care to a large patient population. Many of the patients in that system have high-risk profiles, including medical comorbidities; exposure to chronic pain and opioid use; and psychological and central nervous system injuries, including posttraumatic stress disorder and traumatic brain injury. Hadlandsmyth and colleagues reported increased risk of prolonged opioid use in VA patients after TKA surgery.14 They found that 20% of the patients were still on long-term opioids more than 90 days after TKA.
The purpose of this study was to evaluate the implementation of a comprehensive enhanced recovery after surgery (ERAS) protocol at a regional VA medical center. We hypothesized that the addition of liposomal bupivacaine to a multidisciplinary ERAS protocol would reduce the length of hospital stay and opioid consumption without any deleterious effects on postoperative outcomes.
Methods
A postoperative recovery protocol was implemented in 2013 at VA North Texas Health Care System (VANTHCS) in Dallas; however, many patients continued to have issues with satisfactory pain control, prolonged length of stay, and extended postoperative opioid consumption. A multimodal pain-management protocol and a multidisciplinary perioperative case-management protocol were therefore implemented in 2016 to further improve the clinical outcomes of patients undergoing TKA surgery. The senior surgeon (JM) organized a multidisciplinary team of health care providers to identify and implement potential solutions. This task force met weekly and consisted of surgeons, anesthesiologists, certified registered nurse anesthetists, orthopedic physician assistants, a nurse coordinator, a physical therapist, and an occupational therapist, as well as operating room, postanesthesia care unit (PACU), and surgical ward nurses. In addition, staff from the home health agencies and social services attended the weekly meetings.
We conducted a retrospective review of all patients who had undergone unilateral TKA from 2013 to 2018 at VANTHCS. This was a consecutive, unselected cohort. All patients were under the care of a single surgeon using identical implant systems and identical surgical techniques. This study was approved by the institutional review board at VANTHCS. Patients were divided into 2 distinct and consecutive cohorts. The standard of care (SOC) group included all patients from 2013 to 2016. The ERAS group included all patients after the institution of the standardized protocol until the end of the study period.
Data on patient demographics, the American Society of Anesthesiologists risk classification, and preoperative functional status were extracted. Anesthesia techniques included either general endotracheal anesthesia or subarachnoid block with monitored anesthesia care. The quantity of opioids given during surgery, in the PACU, during the inpatient stay, as discharge prescriptions, and as refills of narcotic prescriptions up to 3 months postsurgery was recorded. All opioids were converted into morphine equivalent dosages (MED) to allow uniform analysis with the statistical methods described below.15 The VHA is a closed health care delivery system; therefore, all of the prescriptions ordered by surgery providers were recorded in the electronic health record.
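Converting every opioid to a common oral morphine-equivalent scale is what makes doses comparable across drugs and routes of administration. A minimal sketch of the bookkeeping follows; the conversion factors below are commonly published illustrative values, not necessarily the exact table the authors used (reference 15).

```python
# Illustrative oral morphine-equivalent conversion factors, in mg of oral
# morphine per mg of drug. These are commonly cited values; the study's
# actual factors come from the published synthesis it cites.
OME_FACTORS = {
    "morphine": 1.0,
    "hydrocodone": 1.0,
    "oxycodone": 1.5,
    "hydromorphone": 4.0,  # oral route
}

def total_oral_med(doses):
    """Sum a list of (drug, mg) doses into a total oral MED in mg."""
    return sum(mg * OME_FACTORS[drug] for drug, mg in doses)

# Example: the protocol's preoperative oxycodone 20 mg dose, and a
# 30-tablet discharge prescription of hydrocodone 10 mg.
print(total_oral_med([("oxycodone", 20)]))         # 30.0 mg MED
print(total_oral_med([("hydrocodone", 10)] * 30))  # 300.0 mg MED
```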
ERAS Protocol
The SOC cohort was predominantly managed with general endotracheal anesthesia; the ERAS group was predominantly managed with subarachnoid blocks (Table 1). Preoperatively under the ERAS protocol, patients were administered oral gabapentin 300 mg, acetaminophen 650 mg, and oxycodone 20 mg, as well as IV ondansetron 4 mg. Intraoperatively, minimal opioids were used. In the PACU, patients received hydromorphone (Dilaudid) 0.25 mg IV as needed every 15 minutes, up to 1 mg/h. The nursing staff was trained to titrate the medication using visual analog pain scale scores. During the inpatient stay, patients received 1 g IV acetaminophen every 6 hours for 3 doses and oral acetaminophen as needed thereafter. Other medications in the multimodal pain-management protocol included gabapentin 300 mg twice daily, meloxicam 15 mg daily, and oxycodone 10 mg every 4 hours as needed. The rescue medication for insufficient pain relief was hydromorphone 0.25 mg IV every 15 minutes for a visual analog pain scale score > 8. On discharge, patients received a prescription for 30 tablets of hydrocodone 10 mg.
Periarticular Injections
Intraoperatively, all patients in the SOC and ERAS groups received periarticular injections. The liposomal bupivacaine injection was added to the standard injection mixture for the ERAS group. For the SOC group, the total volume of 100 ml was divided into 10 separate 10-ml syringes; for the ERAS group, the total volume of 140 ml was divided into 14 separate 10-ml syringes. The SOC group injections were performed with an 18-gauge needle, with gross infiltration of the periarticular soft tissues. The ERAS group injections were done with more attention to anatomical detail. Injection sites for the ERAS group included the posterior joint capsule, the medial compartment, the lateral compartment, the tibial fat pad, the quadriceps and patellar tendons, the femoral and tibial periosteum circumferentially, and the anterior joint capsule. Each needle-stick in the ERAS group delivered 1 to 1.5 ml through a 22-gauge needle to each compartment of the knee.
Outcome Variable
The primary outcome measure was total oral MED intraoperatively, in the PACU, during the hospital inpatient stay, in the hospital discharge prescription, and during the 3-month period after hospital discharge. Incidence of nausea and vomiting during the inpatient stay and any narcotic use at 6 months postsurgery were secondary binary outcomes.
Statistical Analysis
Demographic data and the clinical characteristics for the entire group were described using the sample mean and SD for continuous variables and the frequency and percentage for categorical variables. Differences between the 2 cohorts were analyzed using a 2-independent-sample t test and Fisher exact test.
Total oral MED in each phase of care was estimated using a separate Poisson model because the data were not normally distributed. A log-linear regression model was used to evaluate the main effect of the ERAS vs SOC cohort on the total oral MED used. Finally, separate multiple logistic regression models were used to estimate the odds of postoperative nausea and vomiting and of narcotic use at 6 months postsurgery between the cohorts. The adjusted odds ratio (OR) was estimated from the logistic model. Age, sex, body mass index, preoperative functional independence score, narcotic use within 3 months prior to surgery, anesthesia type (subarachnoid block with monitored anesthesia care vs general endotracheal anesthesia), and postoperative complications (yes/no) were included as covariates in each model. The length of hospital stay was also included, along with the above-mentioned factors, as a covariate in the models estimating the total oral MED during the hospital stay, on hospital discharge, during the 3-month period after hospital discharge, and at 6 months following hospital discharge.
Statistical analysis was done using SAS version 9.4. The level of significance was set at α = 0.05 (2 tailed), and we implemented the false discovery rate (FDR) procedure to control false positives over multiple tests.16
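The false discovery rate procedure cited here is the Benjamini-Hochberg step-up method: sort the p values, rescale each by m/rank, and enforce monotonicity from the largest p value downward. A minimal implementation, fed the seven raw p values reported in the Results:

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p values (step-up FDR procedure)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, ascending p
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):  # walk from the largest p to the smallest
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

# The 7 raw p values reported in the Results section.
raw = [.0001, .0002, .0001, .0001, .29, .013, .019]
print([round(q, 4) for q in bh_adjust(raw)])
```

The adjusted values this produces agree with the FDR values reported in the Results to the precision shown in the article.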
Results
Two hundred forty-nine patients underwent 296 elective unilateral TKAs in this study from 2013 through 2018. Thirty-one patients had both knees replaced under the SOC protocol, and 5 patients had both knees replaced under the ERAS protocol. Eleven of the patients who eventually had both knees replaced had 1 operation under each protocol. The SOC group included 196 TKAs, and the ERAS group included 100 TKAs. Of the 196 SOC patients, 94% were male. The mean age was 68.2 years (range, 48-86). The length of hospital stay ranged from 36.6 to 664.3 hours. Of the 100 ERAS patients, 96% were male (Table 2). The mean age was 66.7 years (range, 48-85). The length of hospital stay ranged from 12.5 to 45 hours.
Perioperative Opioid Use
Of the SOC patients, 99.0% received narcotics intraoperatively (range, 0-198 mg MED), and 74.5% received narcotics during PACU recovery (range, 0-141 mg MED). The total oral MED during the hospital stay for the SOC patients ranged from 10 to 2,946 mg. Of the ERAS patients, 86% received no narcotics during surgery (range, 0-110 mg MED), and 98% received no narcotics during PACU recovery (range, 0-65 mg MED). The total oral MED during the hospital stay for the ERAS patients ranged from 10 to 240 mg.
The MED used was significantly lower for the ERAS patients than it was for the SOC patients during surgery (10.5 mg vs 57.4 mg, P = .0001, FDR = .0002) and in the PACU (1.3 mg vs 13.6 mg, P = .0002, FDR = .0004), during the inpatient stay (66.7 mg vs 169.5 mg, P = .0001, FDR = .0002), and on hospital discharge (419.3 mg vs 776.7 mg, P = .0001, FDR = .0002). However, there was no significant difference in the total MED prescriptions filled between patients on the ERAS protocol vs those who received SOC during the 3-month period after hospital discharge (858.3 mg vs 1126.1 mg, P = .29, FDR = .29)(Table 3).
Finally, the logistic regression analysis, adjusting for the covariates, demonstrated that ERAS patients were less likely than SOC patients to take narcotics at 6 months following hospital discharge (OR, 0.23; P = .013; FDR = .018) and less likely to have postoperative nausea and vomiting (OR, 0.18; P = .019; FDR = .02). There was no statistically significant difference in complication rates between the SOC and ERAS groups (11.2% and 5.0%, respectively; overall complication rate, 9.1%; P = .09) (Table 4).
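The OR of 0.23 for 6-month narcotic use is adjusted for the model's covariates. As a rough cross-check, an unadjusted odds ratio can be back-calculated from the usage rates reported at follow-up (16% of the 100 ERAS TKAs vs 37.2% of the 196 SOC TKAs); the counts below are approximate reconstructions, and an unadjusted value is expected to differ from the adjusted one.

```python
# Approximate 2x2 counts reconstructed from the reported percentages:
# any narcotic use at 6 months, ERAS 16% of 100 vs SOC 37.2% of 196.
eras_yes, eras_no = 16, 100 - 16
soc_yes = round(0.372 * 196)            # ~73 patients
soc_no = 196 - soc_yes                  # ~123 patients

# Unadjusted odds ratio = odds(ERAS) / odds(SOC)
odds_ratio = (eras_yes / eras_no) / (soc_yes / soc_no)
print(round(odds_ratio, 2))             # ~0.32 unadjusted vs 0.23 adjusted
```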
Discussion
Orthopedic surgery has been associated with long-term opioid use and misuse. Orthopedic surgeons are frequently among the highest prescribers of narcotics. According to Volkow and colleagues, orthopedic surgeons were the fourth largest prescribers of opioids in 2009, behind primary care physicians, internists, and dentists.17 The opioid crisis in the United States is well recognized. In 2017, > 70,000 deaths occurred due to drug overdoses, with 68% involving a prescription or illicit opioid. The Centers for Disease Control and Prevention has estimated a total economic burden of $78.5 billion per year as a direct result of misused prescribed opioids.18 This includes the cost of health care, lost productivity, addiction treatment, and the impact on the criminal justice system.
The current opioid crisis places further emphasis on opioid-reducing or opioid-sparing techniques in patients undergoing TKA. The use of liposomal bupivacaine for intraoperative periarticular injection is debated in the literature regarding its efficacy and whether it should be included in multimodal protocols. Researchers have argued that liposomal bupivacaine is not superior to regular bupivacaine and that its increased cost is therefore not justified.19,20 A meta-analysis from Zhao and colleagues showed no difference in pain control and functional recovery when comparing liposomal bupivacaine and control.21 In a randomized clinical trial, Schroer and colleagues matched liposomal bupivacaine against regular bupivacaine and found no difference in pain scores and similar narcotic use during hospitalization.22
Other studies evaluating liposomal bupivacaine have demonstrated postoperative benefits in pain relief and potential reductions in opioid consumption.23 In a multicenter randomized controlled trial, Barrington and colleagues noted improved pain control at 6 and 12 hours after surgery with liposomal bupivacaine as a periarticular injection vs ropivacaine, though results were similar when compared with intrathecal morphine.24 Snyder and colleagues reported higher patient satisfaction in pain control and overall experience as well as decreased MED consumption in the PACU and on postoperative days 0 to 2 when using liposomal bupivacaine vs a multidrug cocktail for periarticular injection.25
The PILLAR trial, an industry-sponsored study, was designed to compare the effects of local infiltration anesthesia with and without liposomal bupivacaine, with emphasis on a meticulous, standardized infiltration technique. In our study, we used a similar technique, expanding the injection volume to 140 ml delivered throughout the knee in a series of 14 syringes. Each needle-stick delivered 1 to 1.5 ml through a 22-gauge needle to each compartment of the knee. Infiltration technique has varied across the literature focused on periarticular injections.
In our experience, a standard infiltration technique is critical to the effective delivery of liposomal bupivacaine throughout all compartments of the knee and to obtaining reproducible pain control. The importance of injection technique cannot be overemphasized, and variations can be seen in studies published to date.26 Well-designed trials are needed to address this key component.
There have been limited data focused on the veteran population regarding postoperative pain-management strategies and recovery pathways either with or without liposomal bupivacaine. In a retrospective review, Sakamoto and colleagues found VA patients undergoing TKA had reduced opioid use in the first 24 hours after primary TKA with the use of intraoperative liposomal bupivacaine.27 The VA population has been shown to be at high risk for opioid misuse. The prevalence of comorbidities such as traumatic brain injury, posttraumatic stress disorder, and depression in the VA population also places them at risk for polypharmacy of central nervous system–acting medications.28 This emphasizes the importance of multimodal strategies, which can limit or eliminate narcotics in the perioperative period. The implementation of our ERAS protocol reduced opioid use during the intraoperative, PACU, and inpatient phases of care.
While the financial implications of our recovery protocol were not a primary focus of this study, there are notable implications for the overall inpatient cost to the VHA. According to the Health Economics Resource Center, the average daily cost of a VA inpatient surgical bed increased from $4,831 in 2013 to $6,220 in 2018.29 The reduction in length of stay between our cohorts was 44.5 hours, which translates to a substantial financial savings per patient after protocol implementation. A more detailed analysis would be needed to evaluate the financial impact of other aspects of our protocol, such as the elimination of patient-controlled anesthesia and the reduction in total narcotics prescribed in the postoperative global period.
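Under the stated assumptions (the 2018 HERC daily surgical bed cost and the observed mean length-of-stay reduction), the per-patient inpatient savings works out roughly as follows; this is a back-of-the-envelope estimate, not a formal cost analysis.

```python
daily_bed_cost_2018 = 6220        # HERC average daily cost, 2018 ($)
los_reduction_hours = 44.5        # mean reduction between cohorts

# Convert the hour reduction to days, then price it at the daily bed cost.
savings_per_patient = daily_bed_cost_2018 * los_reduction_hours / 24
print(round(savings_per_patient))  # ~$11,533 per patient
```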
Limitations
The limitations of this study include its retrospective design. The VHA patient population may introduce selection bias, as it is older and predominantly male compared with the general population. This could limit the generalizability of our protocol to populations that include more women. In a recent study by Perruccio and colleagues, sex was found to moderate the effects of comorbidities, low back pain, and depressive symptoms on postoperative pain in patients undergoing TKA.30
With regard to outpatient narcotic prescriptions, although we cannot fully know whether these filled prescriptions were used for pain control, it is a reasonable assumption that patients who are dealing with continued postoperative or chronic pain issues will fill these prescriptions or seek refills. It is important to note that the data on prescriptions and refills in the 3-month postoperative period include all narcotic prescriptions filled by any VHA prescriber and are not specifically limited to our orthopedic team. For outpatient narcotic use, we were not able to access accurate pill counts for any discharge prescriptions or subsequent refills that were given throughout the VA system. We were able to report on total prescriptions filled in the first 3 months following TKA.
We calculated total oral MEDs to better understand the amount of narcotics being distributed throughout our population of patients. We believe this provides important information about the overall narcotic burden in the veteran population. There was no significant difference between the SOC and ERAS groups regarding oral MED prescribed in the 3-month postoperative period; however, at the 6-month follow-up visit, only 16% of patients in the ERAS group were taking any type of narcotic vs 37.2% in the SOC group (P = .0002).
Conclusions
A multidisciplinary ERAS protocol implemented at VANTHCS was effective in reducing length of stay and opioid burden throughout all phases of surgical care in our patients undergoing primary TKA. Patient and nursing education appear to be critical components of a successful multimodal pain protocol. Reducing the narcotic burden has valuable financial and medical benefits in this at-risk population.
1. Inacio MCS, Paxton EW, Graves SE, Namba RS, Nemes S. Projected increase in total knee arthroplasty in the United States - an alternative projection model. Osteoarthritis Cartilage. 2017;25(11):1797-1803. doi:10.1016/j.joca.2017.07.022
2. Chou R, Gordon DB, de Leon-Casasola OA, et al. Management of postoperative pain: a clinical practice guideline from the American Pain Society, the American Society of Regional Anesthesia and Pain Medicine, and the American Society of Anesthesiologists’ Committee on Regional Anesthesia, Executive Committee, and Administrative Council [published correction appears in J Pain. 2016 Apr;17(4):508-10. Dosage error in article text]. J Pain. 2016;17(2):131-157. doi:10.1016/j.jpain.2015.12.008
3. Moucha CS, Weiser MC, Levin EJ. Current strategies in anesthesia and analgesia for total knee arthroplasty. J Am Acad Orthop Surg. 2016;24(2):60-73. doi:10.5435/JAAOS-D-14-00259
4. Parvizi J, Miller AG, Gandhi K. Multimodal pain management after total joint arthroplasty. J Bone Joint Surg Am. 2011;93(11):1075-1084. doi:10.2106/JBJS.J.01095
5. Jenstrup MT, Jæger P, Lund J, et al. Effects of adductor-canal-blockade on pain and ambulation after total knee arthroplasty: a randomized study. Acta Anaesthesiol Scand. 2012;56(3):357-364. doi:10.1111/j.1399-6576.2011.02621.x
6. Macfarlane AJ, Prasad GA, Chan VW, Brull R. Does regional anesthesia improve outcome after total knee arthroplasty? Clin Orthop Relat Res. 2009;467(9):2379-2402. doi:10.1007/s11999-008-0666-9
7. Parvataneni HK, Shah VP, Howard H, Cole N, Ranawat AS, Ranawat CS. Controlling pain after total hip and knee arthroplasty using a multimodal protocol with local periarticular injections: a prospective randomized study. J Arthroplasty. 2007;22(6)(suppl 2):33-38. doi:10.1016/j.arth.2007.03.034
8. Busch CA, Shore BJ, Bhandari R, et al. Efficacy of periarticular multimodal drug injection in total knee arthroplasty. A randomized trial. J Bone Joint Surg Am. 2006;88(5):959-963. doi:10.2106/JBJS.E.00344
9. Lamplot JD, Wagner ER, Manning DW. Multimodal pain management in total knee arthroplasty: a prospective randomized controlled trial. J Arthroplasty. 2014;29(2):329-334. doi:10.1016/j.arth.2013.06.005
10. Hyland SJ, Deliberato DG, Fada RA, Romanelli MJ, Collins CL, Wasielewski RC. Liposomal bupivacaine versus standard periarticular injection in total knee arthroplasty with regional anesthesia: a prospective randomized controlled trial. J Arthroplasty. 2019;34(3):488-494. doi:10.1016/j.arth.2018.11.026
11. Barrington JW, Lovald ST, Ong KL, Watson HN, Emerson RH Jr. Postoperative pain after primary total knee arthroplasty: comparison of local injection analgesic cocktails and the role of demographic and surgical factors. J Arthroplasty. 2016;31(9)(suppl):288-292. doi:10.1016/j.arth.2016.05.002
12. Bramlett K, Onel E, Viscusi ER, Jones K. A randomized, double-blind, dose-ranging study comparing wound infiltration of DepoFoam bupivacaine, an extended-release liposomal bupivacaine, to bupivacaine HCl for postsurgical analgesia in total knee arthroplasty. Knee. 2012;19(5):530-536. doi:10.1016/j.knee.2011.12.004
13. Mont MA, Beaver WB, Dysart SH, Barrington JW, Del Gaizo D. Local infiltration analgesia with liposomal bupivacaine improves pain scores and reduces opioid use after total knee arthroplasty: results of a randomized controlled trial. J Arthroplasty. 2018;33(1):90-96. doi:10.1016/j.arth.2017.07.024
14. Hadlandsmyth K, Vander Weg MW, McCoy KD, Mosher HJ, Vaughan-Sarrazin MS, Lund BC. Risk for prolonged opioid use following total knee arthroplasty in veterans. J Arthroplasty. 2018;33(1):119-123. doi:10.1016/j.arth.2017.08.022
15. Nielsen S, Degenhardt L, Hoban B, Gisev N. A synthesis of oral morphine equivalents (OME) for opioid utilisation studies. Pharmacoepidemiol Drug Saf. 2016;25(6):733-737. doi:10.1002/pds.3945
16. Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Statist Soc B. 1995;57(1):289-300. doi:10.1111/j.2517-6161.1995.tb02031.x
17. Volkow ND, McLellan TA, Cotto JH, Karithanom M, Weiss SRB. Characteristics of opioid prescriptions in 2009. JAMA. 2011;305(13):1299-1301. doi:10.1001/jama.2011.401
18. Scholl L, Seth P, Kariisa M, Wilson N, Baldwin G. Drug and opioid-involved overdose deaths - United States, 2013-2017. MMWR Morb Mortal Wkly Rep. 2018;67(5152):1419-1427. doi:10.15585/mmwr.mm675152e1
19. Pichler L, Poeran J, Zubizarreta N, et al. Liposomal bupivacaine does not reduce inpatient opioid prescription or related complications after knee arthroplasty: a database analysis. Anesthesiology. 2018;129(4):689-699. doi:10.1097/ALN.0000000000002267
20. Jain RK, Porat MD, Klingenstein GG, Reid JJ, Post RE, Schoifet SD. The AAHKS Clinical Research Award: liposomal bupivacaine and periarticular injection are not superior to single-shot intra-articular injection for pain control in total knee arthroplasty. J Arthroplasty. 2016;31(9)(suppl):22-25. doi:10.1016/j.arth.2016.03.036
21. Zhao B, Ma X, Zhang J, Ma J, Cao Q. The efficacy of local liposomal bupivacaine infiltration on pain and recovery after total joint arthroplasty: a systematic review and meta-analysis of randomized controlled trials. Medicine (Baltimore). 2019;98(3):e14092. doi:10.1097/MD.0000000000014092
22. Schroer WC, Diesfeld PG, LeMarr AR, Morton DJ, Reedy ME. Does extended-release liposomal bupivacaine better control pain than bupivacaine after total knee arthroplasty (TKA)? A prospective, randomized clinical trial. J Arthroplasty. 2015;30(9)(suppl):64-67. doi:10.1016/j.arth.2015.01.059
23. Ma J, Zhang W, Yao S. Liposomal bupivacaine infiltration versus femoral nerve block for pain control in total knee arthroplasty: a systematic review and meta-analysis. Int J Surg. 2016;36(Pt A):44-55. doi:10.1016/j.ijsu.2016.10.007
24. Barrington JW, Emerson RH, Lovald ST, Lombardi AV, Berend KR. No difference in early analgesia between liposomal bupivacaine injection and intrathecal morphine after TKA. Clin Orthop Relat Res. 2017;475(1):94-105. doi:10.1007/s11999-016-4931-z
25. Snyder MA, Scheuerman CM, Gregg JL, Ruhnke CJ, Eten K. Improving total knee arthroplasty perioperative pain management using a periarticular injection with bupivacaine liposomal suspension. Arthroplast Today. 2016;2(1):37-42. doi:10.1016/j.artd.2015.05.005
26. Kuang MJ,Du Y, Ma JX, He W, Fu L, Ma XL. The efficacy of liposomal bupivacaine using periarticular injection in total knee arthroplasty: a systematic review and meta-analysis. J Arthroplasty. 2017;32(4):1395-1402. doi:10.1016/j.arth.2016.12.025
27. Sakamoto B, Keiser S, Meldrum R, Harker G, Freese A. Efficacy of liposomal bupivacaine infiltration on the management of total knee arthroplasty. JAMA Surg. 2017;152(1):90-95. doi:10.1001/jamasurg.2016.3474
28. Collett GA, Song K, Jaramillo CA, Potter JS, Finley EP, Pugh MJ. Prevalence of central nervous system polypharmacy and associations with overdose and suicide-related behaviors in Iraq and Afghanistan war veterans in VA care 2010-2011. Drugs Real World Outcomes. 2016;3(1):45-52. doi:10.1007/s40801-015-0055-0
29. US Department of Veterans Affairs. HERC inpatient average cost data. Updated April 2, 2021. Accessed April 16, 2021. https://www.herc.research.va.gov/include/page.asp?id=inpatient#herc-inpat-avg-cost
30. Perruccio AV, Fitzpatrick J, Power JD, et al. Sex-modified effects of depression, low back pain, and comorbidities on pain after total knee arthroplasty for osteoarthritis. Arthritis Care Res (Hoboken). 2020;72(8):1074-1080. doi:10.1002/acr.24002
1. Inacio MCS, Paxton EW, Graves SE, Namba RS, Nemes S. Projected increase in total knee arthroplasty in the United States - an alternative projection model. Osteoarthritis Cartilage. 2017;25(11):1797-1803. doi:10.1016/j.joca.2017.07.022
2. Chou R, Gordon DB, de Leon-Casasola OA, et al. Management of Postoperative pain: a clinical practice guideline from the American Pain Society, the American Society of Regional Anesthesia and Pain Medicine, and the American Society of Anesthesiologists’ Committee on Regional Anesthesia, Executive Committee, and Administrative Council [published correction appears in J Pain. 2016 Apr;17(4):508-10. Dosage error in article text]. J Pain. 2016;17(2):131-157. doi:10.1016/j.jpain.2015.12.008
3. Moucha CS, Weiser MC, Levin EJ. Current Strategies in anesthesia and analgesia for total knee arthroplasty. J Am Acad Orthop Surg. 2016;24(2):60-73. doi:10.5435/JAAOS-D-14-00259
4. Parvizi J, Miller AG, Gandhi K. Multimodal pain management after total joint arthroplasty. J Bone Joint Surg Am. 2011;93(11):1075-1084. doi:10.2106/JBJS.J.01095
5. Jenstrup MT, Jæger P, Lund J, et al. Effects of adductor-canal-blockade on pain and ambulation after total knee arthroplasty: a randomized study. Acta Anaesthesiol Scand. 2012;56(3):357-364. doi:10.1111/j.1399-6576.2011.02621.x
6. Macfarlane AJ, Prasad GA, Chan VW, Brull R. Does regional anesthesia improve outcome after total knee arthroplasty?. Clin Orthop Relat Res. 2009;467(9):2379-2402. doi:10.1007/s11999-008-0666-9
7. Parvataneni HK, Shah VP, Howard H, Cole N, Ranawat AS, Ranawat CS. Controlling pain after total hip and knee arthroplasty using a multimodal protocol with local periarticular injections: a prospective randomized study. J Arthroplasty. 2007;22(6)(suppl 2):33-38. doi:10.1016/j.arth.2007.03.034
8. Busch CA, Shore BJ, Bhandari R, et al. Efficacy of periarticular multimodal drug injection in total knee arthroplasty. A randomized trial. J Bone Joint Surg Am. 2006;88(5):959-963. doi:10.2106/JBJS.E.00344
9. Lamplot JD, Wagner ER, Manning DW. Multimodal pain management in total knee arthroplasty: a prospective randomized controlled trial. J Arthroplasty. 2014;29(2):329-334. doi:10.1016/j.arth.2013.06.005
10. Hyland SJ, Deliberato DG, Fada RA, Romanelli MJ, Collins CL, Wasielewski RC. Liposomal bupivacaine versus standard periarticular injection in total knee arthroplasty with regional anesthesia: a prospective randomized controlled trial. J Arthroplasty. 2019;34(3):488-494. doi:10.1016/j.arth.2018.11.026
11. Barrington JW, Lovald ST, Ong KL, Watson HN, Emerson RH Jr. Postoperative pain after primary total knee arthroplasty: comparison of local injection analgesic cocktails and the role of demographic and surgical factors. J Arthroplasty. 2016;31(9) (suppl):288-292. doi:10.1016/j.arth.2016.05.002
12. Bramlett K, Onel E, Viscusi ER, Jones K. A randomized, double-blind, dose-ranging study comparing wound infiltration of DepoFoam bupivacaine, an extended-release liposomal bupivacaine, to bupivacaine HCl for postsurgical analgesia in total knee arthroplasty. Knee. 2012;19(5):530-536. doi:10.1016/j.knee.2011.12.004
13. Mont MA, Beaver WB, Dysart SH, Barrington JW, Del Gaizo D. Local infiltration analgesia with liposomal bupivacaine improves pain scores and reduces opioid use after total knee arthroplasty: results of a randomized controlled trial. J Arthroplasty. 2018;33(1):90-96. doi:10.1016/j.arth.2017.07.024
14. Hadlandsmyth K, Vander Weg MW, McCoy KD, Mosher HJ, Vaughan-Sarrazin MS, Lund BC. Risk for prolonged opioid use following total knee arthroplasty in veterans. J Arthroplasty. 2018;33(1):119-123. doi:10.1016/j.arth.2017.08.022
15. Nielsen S, Degenhardt L, Hoban B, Gisev N. A synthesis of oral morphine equivalents (OME) for opioid utilisation studies. Pharmacoepidemiol Drug Saf. 2016;25(6):733-737. doi:10.1002/pds.3945
16. Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Statist Soc B. 1995;57(1):289-300. doi:10.1111/j.2517-6161.1995.tb02031.x
17. Volkow ND, McLellan TA, Cotto JH, Karithanom M, Weiss SRB. Characteristics of opioid prescriptions in 2009. JAMA. 2011;305(13):1299-1301. doi:10.1001/jama.2011.401
18. Scholl L, Seth P, Kariisa M, Wilson N, Baldwin G. Drug and opioid-involved overdose deaths - United States, 2013-2017. MMWR Morb Mortal Wkly Rep. 2018;67(5152):1419-1427. doi:10.15585/mmwr.mm675152e1
19. Pichler L, Poeran J, Zubizarreta N, et al. Liposomal bupivacaine does not reduce inpatient opioid prescription or related complications after knee arthroplasty: a database analysis. Anesthesiology. 2018;129(4):689-699. doi:10.1097/ALN.0000000000002267
20. Jain RK, Porat MD, Klingenstein GG, Reid JJ, Post RE, Schoifet SD. The AAHKS Clinical Research Award: liposomal bupivacaine and periarticular injection are not superior to single-shot intra-articular injection for pain control in total knee arthroplasty. J Arthroplasty. 2016;31(9)(suppl):22-25. doi:10.1016/j.arth.2016.03.036
21. Zhao B, Ma X, Zhang J, Ma J, Cao Q. The efficacy of local liposomal bupivacaine infiltration on pain and recovery after total joint arthroplasty: a systematic review and meta-analysis of randomized controlled trials. Medicine (Baltimore). 2019;98(3):e14092. doi:10.1097/MD.0000000000014092
22. Schroer WC, Diesfeld PG, LeMarr AR, Morton DJ, Reedy ME. Does extended-release liposomal bupivacaine better control pain than bupivacaine after total knee arthroplasty (TKA)? A prospective, randomized clinical trial. J Arthroplasty. 2015;30(9)(suppl):64-67. doi:10.1016/j.arth.2015.01.059
23. Ma J, Zhang W, Yao S. Liposomal bupivacaine infiltration versus femoral nerve block for pain control in total knee arthroplasty: a systematic review and meta-analysis. Int J Surg. 2016;36(Pt A): 44-55. doi:10.1016/j.ijsu.2016.10.007
24. Barrington JW, Emerson RH, Lovald ST, Lombardi AV, Berend KR. No difference in early analgesia between liposomal bupivacaine injection and intrathecal morphine after TKA. Clin Orthop Relat Res. 2017;475(1):94-105. doi:10.1007/s11999-016-4931-z
25. Snyder MA, Scheuerman CM, Gregg JL, Ruhnke CJ, Eten K. Improving total knee arthroplasty perioperative pain management using a periarticular injection with bupivacaine liposomal suspension. Arthroplast Today. 2016;2(1):37-42. doi:10.1016/j.artd.2015.05.005
26. Kuang MJ,Du Y, Ma JX, He W, Fu L, Ma XL. The efficacy of liposomal bupivacaine using periarticular injection in total knee arthroplasty: a systematic review and meta-analysis. J Arthroplasty. 2017;32(4):1395-1402. doi:10.1016/j.arth.2016.12.025
27. Sakamoto B, Keiser S, Meldrum R, Harker G, Freese A. Efficacy of liposomal bupivacaine infiltration on the management of total knee arthroplasty. JAMA Surg. 2017;152(1):90-95. doi:10.1001/jamasurg.2016.3474
28. Collett GA, Song K, Jaramillo CA, Potter JS, Finley EP, Pugh MJ. Prevalence of central nervous system polypharmacy and associations with overdose and suicide-related behaviors in Iraq and Afghanistan war veterans in VA care 2010-2011. Drugs Real World Outcomes. 2016;3(1):45-52. doi:10.1007/s40801-015-0055-0
29. US Department of Veterans Affairs. HERC inpatient average cost data. Updated April 2, 2021. Accessed April 16, 2021. https://www.herc.research.va.gov/include/page.asp?id=inpatient#herc-inpat-avg-cost
30. Perruccio AV, Fitzpatrick J, Power JD, et al. Sex-modified effects of depression, low back pain, and comorbidities on pain after total knee arthroplasty for osteoarthritis. Arthritis Care Res (Hoboken). 2020;72(8):1074-1080. doi:10.1002/acr.24002
Reducing False-Positive Results With Fourth-Generation HIV Testing at a Veterans Affairs Medical Center
Since the first clinical reports of patients with AIDS in 1981, knowledge of how HIV infection causes AIDS has grown, and the test methodologies used to diagnose the illness have been progressively refined.1-3 Because earlier diagnosis and treatment with available antiretroviral therapies yield both public health and clinical benefits, universal screening with opt-out consent has been a standard-of-practice recommendation of the Centers for Disease Control and Prevention (CDC) since 2006; it also has been recommended by the US Preventive Services Task Force and has been widely implemented.4-7
HIV Screening
Although HIV screening assays have evolved to be accurate, with very high sensitivities and specificities, false-positive results remain a significant issue, as they have been historically.8-16 Using an HIV assay in a low-prevalence population predictably reduces the positive predictive value (PPV) of even an otherwise accurate assay.8-23 In light of this, laboratory HIV testing algorithms include confirmatory testing to increase the likelihood that the correct diagnosis is rendered.
The fourth-generation assay has been shown to be more sensitive and specific than the third-generation assay because it adds detection of the p24 antigen and refines the antigenic targets for antibody detection.6,8,11-13,18-20,22 Owing to these improvements, increased sensitivity and specificity, with reductions in both false-positive and false-negative results, have been reported in the general population.
In the nonveteran population, switching from the older third-generation assay to a more sensitive and specific fourth-generation HIV screening assay has been shown to reduce the false-positive screening rate.18,19,22 For instance, Muthukumar and colleagues reported a false-positive rate of 2 of 99 (2%) tested specimens for the fourth-generation ARCHITECT HIV Ag/Ab Combo assay vs 9 of 99 (9%) for the third-generation ADVIA Centaur HIV 1/O/2 Enhanced assay.18 Fourth-generation HIV screening assays also shorten the window period by detecting HIV infection sooner after acute infection.19 Nevertheless, Mitchell and colleagues demonstrated that even highly specific fourth-generation HIV assays, with specificities estimated at 99.7%, can have PPVs as low as 25.0% when used in a population of low HIV prevalence (such as 0.1%).19 However, the veteran population differs significantly from the general population on a number of variables, including severity of disease and susceptibility to infections; as a result, extrapolation of these data from the general population may be limited.24-26 To our knowledge, this article represents the first study directly examining the reduction in false-positive results with the switch from a third-generation to a fourth-generation HIV assay in the veteran patient population at a regional US Department of Veterans Affairs (VA) medical center (VAMC).8,11
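The PPV figure cited above follows directly from the standard formula PPV = (sensitivity × prevalence) / (sensitivity × prevalence + (1 − specificity) × (1 − prevalence)). A minimal sketch (Python, assuming 100% sensitivity for simplicity) reproduces the roughly 25% PPV reported for a 99.7%-specific assay in a 0.1%-prevalence population:

```python
def ppv(prevalence: float, sensitivity: float, specificity: float) -> float:
    """Positive predictive value: P(truly infected | positive screen)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Highly specific assay (99.7%) applied to a low-prevalence (0.1%) population:
print(round(ppv(0.001, 1.0, 0.997), 2))  # → 0.25, i.e., ~3 of 4 positives are false
```

The same formula shows why confirmatory testing matters less in high-prevalence settings: at 10% prevalence the same assay's PPV exceeds 97%.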
Methods
Quality assurance documents on test volume were retrospectively reviewed to obtain the number of HIV screening tests performed by the laboratory at the Corporal Michael J. Crescenz VAMC (CMJCVAMC) in Philadelphia, Pennsylvania, between March 1, 2016, and February 28, 2017, prior to implementation of the fourth-generation assay. The study also included results from the first year of use of the fourth-generation assay (March 1, 2017 to February 28, 2018). In addition, paper quality assurance records of all positive screening results during those periods were reviewed and manually counted for the abstract presentation of these data.
For assurance of accuracy, a search of all HIV testing assays using Veterans Health Information Systems and Technology Architecture and FileMan also was performed, and the results were compared to records in the Computerized Patient Record System (CPRS). Any discrepancies in the numbers of test results generated by both searches were investigated, and data for the manuscript were derived from records associating tests with particular patients. Only results from patient samples were considered for the electronic search. Quality samples that did not correspond to a true patient as identified in CPRS or same time patient sample duplicates were excluded from the calculations. Basic demographic data (age, ethnicity, and gender) were obtained from this FileMan search. The third-generation assay was the Ortho-Clinical Diagnostics Vitros, and the fourth-generation assay was the Abbott Architect.
To interpret the true HIV result of each sample with a reactive or positive screening result, the CDC laboratory HIV testing algorithm was followed and reviewed with a clinical pathologist or microbiologist director.12,13 All specimens interpreted as HIV positive by the pathologist or microbiologist director were discussed with the clinical health care provider at the time of the test, with results added to CPRS after all testing was complete and discussions had taken place. All initially reactive specimens (confirmed with retesting in duplicate on the screening platform, with at least 1 repeat reactive result) were further tested with the Bio-Rad Geenius HIV 1/2 Supplemental Assay, which screens for both HIV-1 and HIV-2 antibodies. Specimens with reactive results by this supplemental assay were interpreted as positive for HIV based on the CDC laboratory HIV testing algorithm. Specimens with negative or indeterminate results by the supplemental assay then underwent HIV-1 nucleic acid testing (NAT) using the Roche Diagnostics COBAS AmpliPrep/COBAS TaqMan HIV-1 Test v2.0. Specimens with viral load detected on NAT were interpreted as positive for HIV infection, while specimens with no viral load detected were interpreted as negative for HIV-1 infection. Although there were no HIV-2 positive or indeterminate specimens during the study period, HIV-2 reactivity also would have been interpreted per the CDC laboratory HIV testing algorithm. Specimens with inadequate volume to complete all testing steps would be interpreted as indeterminate for HIV, with a request for an additional specimen to complete testing. All testing platforms used for HIV testing in the laboratory had been properly validated prior to use.
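The branching of the workflow just described can be summarized as decision logic. The sketch below (Python; the function and argument names are illustrative, not from any laboratory system) is a simplified rendering of the CDC laboratory HIV testing algorithm as applied here:

```python
from typing import Optional

def interpret_hiv(screen_reactive: bool,
                  repeat_reactive: bool = False,
                  supplemental: Optional[str] = None,
                  nat_detected: Optional[bool] = None) -> str:
    """Simplified sketch of the CDC laboratory HIV testing algorithm.

    supplemental: HIV-1/HIV-2 antibody differentiation assay result
        ("reactive", "negative", or "indeterminate").
    nat_detected: HIV-1 nucleic acid test result, needed only when the
        supplemental assay is negative or indeterminate.
    """
    if not screen_reactive:
        return "negative"
    if not repeat_reactive:  # duplicate retest on the screening platform
        return "negative"
    if supplemental == "reactive":
        return "positive"    # antibody-confirmed infection
    if nat_detected is None:  # e.g., inadequate specimen volume
        return "indeterminate: request additional specimen"
    # Detected viral load resolves an early, antibody-negative infection;
    # no detectable viral load means the screen was a false positive.
    return "positive" if nat_detected else "negative"
```

Note how a reactive screen resolves to "negative" only after the supplemental assay and NAT both fail to confirm infection; this is the path every false-positive screen follows.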
The numbers of false-positive and indeterminate results, alongside the total number of HIV screening tests performed, were tabulated by month in Microsoft Excel throughout the study period. Statistical significance was assessed with a 1-tailed homoscedastic t test calculated in Excel.
Results
From March 1, 2016 to February 28, 2017, 7,516 specimens were screened for HIV, using the third-generation assay, and 52 specimens tested positive for HIV. On further review of these reactive specimens per the CDC laboratory testing algorithm, 24 tests were true positive and 28 were false positives with a PPV of 46% (24/52) (Figure 1).
From March 1, 2017 to February 28, 2018, 7,802 specimens were screened for HIV using a fourth-generation assay and 23 tested positive for HIV. On further review of these reactive specimens per the CDC laboratory testing algorithm, 16 were true positive and 7 were false positives with a PPV of 70% (16/23).
The fourth-generation assay had a lower false-positivity rate than the third-generation assay (0.09% vs 0.37%, respectively), a 75.7% decrease after the implementation of fourth-generation testing. The decrease in the number of false-positive test results per month with fourth-generation test implementation was statistically significant (P = .002). The mean (SD) number of false-positive test results was 2.3 (1.7) per month for the third-generation assay vs 0.58 (0.9) per month for the fourth-generation assay. The decrease in the percentage of false positives per month with the implementation of the fourth-generation assay also was statistically significant (P = .002) (Figure 2).
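These summary figures follow directly from the counts in the two preceding paragraphs, as a quick check (Python) illustrates; note that the 75.7% decrease is computed from the rounded rates (0.09/0.37):

```python
fp3, total3 = 28, 7516  # third generation: false positives / specimens screened
fp4, total4 = 7, 7802   # fourth generation: false positives / specimens screened

rate3 = round(100 * fp3 / total3, 2)           # false-positivity rate, %
rate4 = round(100 * fp4 / total4, 2)
decrease = round(100 * (1 - rate4 / rate3), 1)  # relative decrease, %

ppv3 = round(100 * 24 / 52)  # PPV, third generation
ppv4 = round(100 * 16 / 23)  # PPV, fourth generation
print(rate3, rate4, decrease, ppv3, ppv4)  # → 0.37 0.09 75.7 46 70
```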
For population-based reference of the tested population at CMJCVAMC, a FileMan search was performed for basic demographic data of patients whose HIV specimens were screened by the third- or fourth-generation test (Table). Of the 7,516 specimens tested by the third-generation assay, 1,114 did not have readily available demographic information by the FileMan search because the specimens originated outside the facility. For the 6,402 patients tested by the third-generation assay with demographic information, ages ranged from 25 to 97 years (mean, 57 years); this population was 88% male (n = 5,639), 50% African American (n = 3,220), and 43% White (n = 2,756). Of the 7,802 specimens tested by the fourth-generation assay, 993 did not have readily available demographic information by the FileMan search because the specimens originated outside the facility. For the 6,809 patients tested by the fourth-generation assay with demographic information, ages ranged from 24 to 97 years (mean, 56 years); this population was 88% male (n = 5,971), 47% African American (n = 3,189), and 46% White (n = 3,149).
Discussion
Current practice guidelines from the CDC and the US Preventive Services Task Force recommend universal screening of the population for HIV infection.5,6 Because the general population to be screened would normally have a low prevalence of HIV infection, the risk of a false positive on the initial screen is significant.17 Indeed, the CMJCVAMC experience was that with the third-generation screening assay, false-positive test results outnumbered true-positive test results. Even with the fourth-generation assay, approximately one-third of the results were false positives. These results are similar to those observed in studies of nonveteran populations, in which implementation of a fourth-generation screening assay led to significantly fewer false-positive results.18
For laboratories that do not follow CDC testing algorithm guidelines, each false-positive screening result represents a potential opportunity for an HIV misdiagnosis. Even in laboratories with proper procedures in place, false-positive results have consequences for patients and for the cost-effectiveness of laboratory operations.9-11,18 Per CDC HIV testing guidelines, all positive screening results should be retested, which requires additional technologist time and reagents. Only after this additional testing is performed and reviewed appropriately can a final laboratory diagnosis be rendered that meets the standard of laboratory care.
Cost Savings
As observed at CMJCVAMC, the use of a fourth-generation assay with increased sensitivity/specificity led to a reduction in these false-positive results, which improved laboratory efficiency and avoided wasting resources on confirmatory tests.11,18 Cost savings at CMJCVAMC from implementation of the fourth-generation assay include technologist time and reagent cost. Generalizable technologist time costs at any institution would include the time needed to perform the confirmatory HIV-1/HIV-2 antibody differentiation assay (slightly less than 1 hour per specimen at CMJCVAMC) and the time needed to perform the viral load assay (about 6 hours to run a batch of 24 tests at CMJCVAMC). We calculated that confirmatory testing cost $184.51 per test at CMJCVAMC. Replacing the third-generation assay with the more sensitive and specific fourth-generation test saved an estimated $3,875 annually. This estimate does not include the pathologist/director's time spent reviewing HIV results after completion of the algorithm, clinician and patient costs, or patient anxiety while awaiting the confirmatory sequence of tests.
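The annual savings estimate is straightforward arithmetic on the figures just given: 21 fewer false positives in the first fourth-generation year, each avoiding $184.51 of confirmatory testing.

```python
cost_per_confirmation = 184.51  # confirmatory testing cost per specimen at CMJCVAMC
avoided = 28 - 7                # false positives avoided in the first year
print(round(avoided * cost_per_confirmation))  # → 3875 (dollars saved annually)
```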
As diagnosis of HIV can have a significant psychological impact on the patient, it is important to ensure the diagnosis conveyed is correct.27 The provision of an HIV diagnosis to a patient has been described as a traumatic stressor capable of causing psychological harm; this harm should ideally be avoided if the HIV diagnosis is not accurate. There can be a temptation, when presented with a positive or reactive screening test that is known to come from an instrument or assay with a very high sensitivity and specificity, to present this result as a diagnosis to the patient. However, a false diagnosis from a false-positive screen would not only be harmful, but given the low prevalence of the disease in the screened population, would happen fairly frequently; in some settings the number of false positives may actually outnumber the number of true positive test results.
Better screening assays with greater specificity (even fractions of a percentage, given that specificities are already > 99%) would help reduce the number of false positives and reduce the number of potential enticements to convey an incorrect diagnosis. Therefore, by adding an additional layer of safety through greater specificity, the fourth-generation assay implementation helped improve the diagnostic safety of the laboratory and reduced the significant error risk to the clinician who would ultimately bear responsibility for conveying the HIV diagnoses to the patient. Given the increased prevalence of psychological and physical ailments in veterans, it may be even more important to ensure the diagnosis is correct to avoid increased psychological harm.27,28
Veteran Population
For the general population, the fourth-generation assay has been shown to be more sensitive and specific when compared with the third-generation assay due to the addition of detection of p24 antigen and the refinement of the antigenic targets for the antibody detection.6,8,11-13,18-20,22 However, the veteran population that receives VA medical care differs significantly from the nonveteran general population. Compared with nonveterans, veterans tend to have generally poorer health status, more comorbid conditions, and greater need to use medical resources.24-26 In addition, veterans also may differ in sociodemographic status, race, ethnicity, and gender.24-26
VA research in the veteran population is unique, and veterans who use VA health care services are an even more highly selected subpopulation.26 Because of these population differences, conclusions drawn from studies of the general population may not always be applicable to the veteran population treated by VA health care services. Therefore, studies tailored to the veteran population in the VA health care setting are essential to confirm that findings from the general population truly apply.
While the false-positive risk is most closely associated with testing in a low-prevalence population, false-positive screening results also can occur in high-risk individuals, such as an individual receiving preexposure prophylaxis (PrEP) because of ongoing behavior that carries a high risk of HIV acquisition.8,29 A false-positive result in these cases creates a conundrum for the clinician, and the differential diagnosis should consider both very early infection and a false positive. Interventions could include stopping PrEP and treating for presumed early primary HIV infection, or continuing PrEP. Each option has the potential to harm the patient, whether through selection of resistant virus from an inadvertently inadequate treatment regimen, increased risk of infection if PrEP is stopped while the high-risk behavior continues, or the risks of administering the additional antiretroviral agents needed for complete empiric therapy. Cases of individuals on PrEP with false-positive HIV screening tests have been reported previously, both within and outside the veteran population.8 Better screening tests with greater sensitivity/specificity can only help guide better patient care.
Limitations
This quality assurance study was limited to retrospectively identifying the improvement in the false-positive rate on the transition from the third-generation to the more advanced fourth-generation HIV screen. False-positive screen cases could be easily picked up on review of the confirmatory testing per the CDC laboratory HIV testing algorithm.12,13 This study also was a retrospective review of clinically ordered and indicated testing; as a result, without confirmatory testing performed on all negative screen cases, a false-negative rate would not be calculable.
This study also was restricted to only the population being treated in a VA health care setting. This population is known to be different from the general population.24-26
Conclusions
The switch to a fourth-generation assay resulted in a significant reduction in false-positive test results for veteran patients at CMJCVAMC. This reduction not only decreased laboratory workload from the necessary confirmatory testing and subsequent review, but also saved technologist time and reagent costs. While this reduction in false-positive results has been documented in nonveteran populations, this is the first study specifically of a veteran population treated at a VAMC.8,11,18 This study confirms the previously documented improvement in the false-positive rate of HIV screening tests with the change from a third-generation to a fourth-generation assay, now in a veteran population.24
1. Feinberg MB. Changing the natural history of HIV disease. Lancet. 1996;348(9022):239-246. doi:10.1016/s0140-6736(96)06231-9
2. Alexander TS. Human immunodeficiency virus diagnostic testing: 30 years of evolution. Clin Vaccine Immunol. 2016;23(4):249-253. Published 2016 Apr 4. doi:10.1128/CVI.00053-16
3. Mortimer PP, Parry JV, Mortimer JY. Which anti-HTLV III/LAV assays for screening and confirmatory testing?. Lancet. 1985;2(8460):873-877. doi:10.1016/s0140-6736(85)90136-9
4. Holmberg SD, Palella FJ Jr, Lichtenstein KA, Havlir DV. The case for earlier treatment of HIV infection [published correction appears in Clin Infect Dis. 2004 Dec 15;39(12):1869]. Clin Infect Dis. 2004;39(11):1699-1704. doi:10.1086/425743
5. US Preventive Services Task Force, Owens DK, Davidson KW, et al. Screening for HIV Infection: US Preventive Services Task Force Recommendation Statement. JAMA. 2019;321(23):2326-2336. doi:10.1001/jama.2019.6587
6. Branson BM, Handsfield HH, Lampe MA, et al. Revised recommendations for HIV testing of adults, adolescents, and pregnant women in health-care settings. MMWR Recomm Rep. 2006;55(RR-14):1-CE4.
7. Bayer R, Philbin M, Remien RH. The end of written informed consent for HIV testing: not with a bang but a whimper. Am J Public Health. 2017;107(8):1259-1265. doi:10.2105/AJPH.2017.303819
8. Petersen J, Jhala D. It's not HIV! The pitfall of unconfirmed positive HIV screening assays. Abstract presented at: Annual Meeting Pennsylvania Association of Pathologists; April 14, 2018.
9. Wood RW, Dunphy C, Okita K, Swenson P. Two “HIV-infected” persons not really infected. Arch Intern Med. 2003;163(15):1857-1859. doi:10.1001/archinte.163.15.1857
10. Permpalung N, Ungprasert P, Chongnarungsin D, Okoli A, Hyman CL. A diagnostic blind spot: acute infectious mononucleosis or acute retroviral syndrome. Am J Med. 2013;126(9):e5-e6. doi:10.1016/j.amjmed.2013.03.017
11. Dalal S, Petersen J, Luta D, Jhala D. Third- to fourth-generation HIV testing: reduction in false-positive results with the new way of testing, the Corporal Michael J. Crescenz Veteran Affairs Medical Center (CMCVAMC) Experience. Am J Clin Pathol.2018;150(suppl 1):S70-S71. doi:10.1093/ajcp/aqy093.172
12. Centers for Disease Control and Prevention. Laboratory testing for the diagnosis of HIV infection: updated recommendations. Published June 27, 2014. Accessed April 14, 2021. doi:10.15620/cdc.23447
13. Centers for Disease Control and Prevention. 2018 quick reference guide: recommended laboratory HIV testing algorithm for serum or plasma specimens. Updated January 2018. Accessed April 14, 2021. https://stacks.cdc.gov/view/cdc/50872
14. Masciotra S, McDougal JS, Feldman J, Sprinkle P, Wesolowski L, Owen SM. Evaluation of an alternative HIV diagnostic algorithm using specimens from seroconversion panels and persons with established HIV infections. J Clin Virol. 2011;52(suppl 1):S17-S22. doi:10.1016/j.jcv.2011.09.011
15. Morton A. When lab tests lie … heterophile antibodies. Aust Fam Physician. 2014;43(6):391-393.
16. Spencer DV, Nolte FS, Zhu Y. Heterophilic antibody interference causing false-positive rapid human immunodeficiency virus antibody testing. Clin Chim Acta. 2009;399(1-2):121-122. doi:10.1016/j.cca.2008.09.030
17. Kim S, Lee JH, Choi JY, Kim JM, Kim HS. False-positive rate of a “fourth-generation” HIV antigen/antibody combination assay in an area of low HIV prevalence. Clin Vaccine Immunol. 2010;17(10):1642-1644. doi:10.1128/CVI.00258-10
18. Muthukumar A, Alatoom A, Burns S, et al. Comparison of 4th-generation HIV antigen/antibody combination assay with 3rd-generation HIV antibody assays for the occurrence of false-positive and false-negative results. Lab Med. 2015;46(2):84-e29. doi:10.1309/LMM3X37NSWUCMVRS
19. Mitchell EO, Stewart G, Bajzik O, Ferret M, Bentsen C, Shriver MK. Performance comparison of the 4th generation Bio-Rad Laboratories GS HIV Combo Ag/Ab EIA on the EVOLIS™ automated system versus Abbott ARCHITECT HIV Ag/Ab Combo, Ortho Anti-HIV 1+2 EIA on Vitros ECi and Siemens HIV-1/O/2 enhanced on Advia Centaur. J Clin Virol. 2013;58(suppl 1):e79-e84. doi:10.1016/j.jcv.2013.08.009
20. Dubravac T, Gahan TF, Pentella MA. Use of the Abbott Architect HIV antigen/antibody assay in a low incidence population. J Clin Virol. 2013;58(suppl 1):e76-e78. doi:10.1016/j.jcv.2013.10.020
21. Montesinos I, Eykmans J, Delforge ML. Evaluation of the Bio-Rad Geenius HIV-1/2 test as a confirmatory assay. J Clin Virol. 2014;60(4):399-401. doi:10.1016/j.jcv.2014.04.025
22. van Binsbergen J, Siebelink A, Jacobs A, et al. Improved performance of seroconversion with a 4th generation HIV antigen/antibody assay. J Virol Methods. 1999;82(1):77-84. doi:10.1016/s0166-0934(99)00086-5
23. CLSI. User Protocol for Evaluation of Qualitative Test Performance: Approved Guideline. Second ed. EP12-A2. CLSI; 2008:1-46.
24. Agha Z, Lofgren RP, VanRuiswyk JV, Layde PM. Are patients at Veterans Affairs medical centers sicker? A comparative analysis of health status and medical resource use. Arch Intern Med. 2000;160(21):3252-3257. doi:10.1001/archinte.160.21.3252
25. Eibner C, Krull H, Brown KM, et al. Current and projected characteristics and unique health care needs of the patient population served by the Department of Veterans Affairs. Rand Health Q. 2016;5(4):13. Published 2016 May 9.
26. Morgan RO, Teal CR, Reddy SG, Ford ME, Ashton CM. Measurement in Veterans Affairs Health Services Research: veterans as a special population. Health Serv Res. 2005;40(5, pt 2):1573-1583. doi:10.1111/j.1475-6773.2005.00448.x
27. Nightingale VR, Sher TG, Hansen NB. The impact of receiving an HIV diagnosis and cognitive processing on psychological distress and posttraumatic growth. J Trauma Stress. 2010;23(4):452-460. doi:10.1002/jts.20554
28. Spelman JF, Hunt SC, Seal KH, Burgo-Black AL. Post deployment care for returning combat veterans. J Gen Intern Med. 2012;27(9):1200-1209. doi:10.1007/s11606-012-2061-1
29. Ndase P, Celum C, Kidoguchi L, et al. Frequency of false positive rapid HIV serologic tests in African men and women receiving PrEP for HIV prevention: implications for programmatic roll-out of biomedical interventions. PLoS One. 2015;10(4):e0123005. Published 2015 Apr 17. doi:10.1371/journal.pone.0123005
Since the first clinical reports of patients with AIDS in 1981, there have been improvements both in the understanding of how HIV causes AIDS and in the test methodologies used to diagnose the illness.1-3 Given the public health and clinical benefits of earlier diagnosis and treatment with available antiretroviral therapies, universal screening with opt-out consent has been a standard of practice recommendation by the Centers for Disease Control and Prevention (CDC) since 2006; universal screening with opt-out consent also has been recommended by the US Preventive Services Task Force and has been widely implemented.4-7
HIV Screening
While HIV screening assays have evolved to be accurate with very high sensitivities and specificities, false-positive results are a significant issue both currently and historically.8-16 The use of an HIV assay on a low prevalence population predictably reduces the positive predictive value (PPV) of even an otherwise accurate assay.8-23 In light of this, laboratory HIV testing algorithms include confirmatory testing to increase the likelihood that the correct diagnosis is being rendered.
The fourth-generation assay has been shown to be more sensitive and specific compared with that of the third-generation assay due to the addition of detection of p24 antigen and the refinement of the antigenic targets for the antibody detection.6,8,11-13,18-20,22 Due to these improvements, in the general population, increased sensitivity/specificity with a reduction in both false positives and false negatives have been reported.
It has been observed in the nonveteran population that switching from the older third-generation assay to a more sensitive and specific fourth-generation HIV screening assay reduces the false-positive screening rate.18,19,22 For instance, Muthukumar and colleagues demonstrated a false-positive rate of only 2 of 99 (2%) tested specimens for the fourth-generation ARCHITECT HIV Ag/Ab Combo assay vs 9 of 99 (9%) for the third-generation ADVIA Centaur HIV 1/O/2 Enhanced assay.18 In addition, fourth-generation HIV screening assays can shorten the window period by detecting HIV infection sooner after initial acute infection.19 Mitchell and colleagues demonstrated that even highly specific fourth-generation HIV assays, with specificities estimated at 99.7%, can have PPVs as low as 25.0% when used in a population of low HIV prevalence (such as 0.1%).19 However, the veteran population has been documented to differ significantly on a number of population variables, including severity of disease and susceptibility to infections; as a result, extrapolation of these data from the general population may be limited.24-26 To our knowledge, this article represents the first study directly examining the reduction in false-positive results with the switch from a third-generation to a fourth-generation HIV assay for the veteran patient population at a regional US Department of Veterans Affairs (VA) medical center (VAMC).8,11
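The prevalence effect Mitchell and colleagues describe follows directly from Bayes' theorem. As a rough illustration (the function name and the assumption of ~100% sensitivity are ours; the cited study does not present this calculation in code):

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value: P(infected | positive screen)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Specificity 99.7% in a 0.1%-prevalence population (sensitivity assumed ~100%)
print(round(ppv(1.0, 0.997, 0.001), 2))  # 0.25 -- matches the cited PPV of 25.0%
```

Raising the prevalence to even 5% pushes the PPV above 94%, which is why false positives dominate in low-prevalence screening settings in particular.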
Methods
Quality assurance documents on test volume were retrospectively reviewed to obtain the number of HIV screening tests performed by the laboratory at the Corporal Michael J. Crescenz VAMC (CMJCVAMC) in Philadelphia, Pennsylvania, between March 1, 2016, and February 28, 2017, prior to implementation of the fourth-generation assay. The study also included results from the first year of use of the fourth-generation assay (March 1, 2017 to February 28, 2018). In addition, paper quality assurance records of all positive screening results during those periods were reviewed and manually counted for the abstract presentation of these data.
For assurance of accuracy, a search of all HIV testing assays using Veterans Health Information Systems and Technology Architecture and FileMan also was performed, and the results were compared to records in the Computerized Patient Record System (CPRS). Any discrepancies in the numbers of test results generated by both searches were investigated, and data for the manuscript were derived from records associating tests with particular patients. Only results from patient samples were considered for the electronic search. Quality samples that did not correspond to a true patient as identified in CPRS or same time patient sample duplicates were excluded from the calculations. Basic demographic data (age, ethnicity, and gender) were obtained from this FileMan search. The third-generation assay was the Ortho-Clinical Diagnostics Vitros, and the fourth-generation assay was the Abbott Architect.
To interpret the true HIV result of each sample with a reactive or positive screening result, the CDC laboratory HIV testing algorithm was followed and reviewed with a clinical pathologist or microbiologist director.12,13 All specimens interpreted as HIV positive by the pathologist or microbiologist director were discussed with the clinical health care provider at the time of the test, with results added to CPRS after all testing was complete and discussions had taken place. All initially reactive specimens (confirmed with retesting in duplicate on the screening platform with at least 1 repeat reactive result) were further tested with the Bio-Rad Geenius HIV 1/2 Supplemental Assay, which screens for both HIV-1 and HIV-2 antibodies. Specimens with reactive results by this supplemental assay were interpreted as positive for HIV based on the CDC laboratory HIV testing algorithm. Specimens with negative or indeterminate results by the supplemental assay then underwent HIV-1 nucleic acid testing (NAT) using the Roche Diagnostics COBAS AmpliPrep/COBAS TaqMan HIV-1 Test v2.0. Specimens with viral load detected on NAT were interpreted as positive for HIV infection, while specimens with viral load not detected on NAT were interpreted as negative for HIV-1 infection. Although there were no HIV-2 positive or indeterminate specimens during the study period, HIV-2 reactivity also would have been interpreted per the CDC laboratory HIV testing algorithm. Specimens with inadequate volume to complete all testing steps would be interpreted as indeterminate for HIV, with a request for an additional specimen to complete testing. All testing platforms used for HIV testing in the laboratory had been properly validated prior to use.
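The confirmatory sequence above can be summarized as a decision function. This is a simplified, HIV-1-only sketch of the flow as described in this section, not an implementation of the full CDC guidance (the function name and return strings are illustrative):

```python
from typing import Optional

def interpret_result(screen_reactive: bool,
                     supplemental_reactive: Optional[bool] = None,
                     nat_detected: Optional[bool] = None) -> str:
    """Simplified sketch of the confirmatory flow described in Methods."""
    if not screen_reactive:
        return "negative"                     # nonreactive screen: no further testing
    if supplemental_reactive:                 # Geenius HIV-1/2 supplemental assay
        return "positive"
    if nat_detected is None:                  # supplemental negative/indeterminate
        return "indeterminate: request HIV-1 NAT"
    return "positive (acute HIV-1)" if nat_detected else "negative"
```

A reactive screen with a negative supplemental assay but detectable viral load is the acute-infection pattern the fourth-generation p24 component is designed to catch.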
The number of false-positive and indeterminate results was tabulated in Microsoft Excel by month throughout the study period, alongside the total number of HIV screening tests performed. Statistical significance was assessed with a 1-tailed homoscedastic t test calculated in Excel.
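For reference, the pooled-variance (homoscedastic) t statistic that Excel computes can be approximately reconstructed from the monthly summary statistics reported in the Results; this back-calculation is ours and assumes 12 monthly counts per assay:

```python
from math import sqrt

# Monthly false-positive counts, summarized as mean (SD) over 12 months each
m3, s3, n3 = 2.3, 1.7, 12     # third-generation assay
m4, s4, n4 = 0.58, 0.9, 12    # fourth-generation assay

# Pooled variance, then the two-sample t statistic (df = n3 + n4 - 2 = 22)
pooled_var = ((n3 - 1) * s3**2 + (n4 - 1) * s4**2) / (n3 + n4 - 2)
t_stat = (m3 - m4) / sqrt(pooled_var * (1 / n3 + 1 / n4))
print(round(t_stat, 1))  # ~3.1; a 1-tailed test at df = 22 is consistent with P = .002
```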
Results
From March 1, 2016 to February 28, 2017, 7,516 specimens were screened for HIV, using the third-generation assay, and 52 specimens tested positive for HIV. On further review of these reactive specimens per the CDC laboratory testing algorithm, 24 tests were true positive and 28 were false positives with a PPV of 46% (24/52) (Figure 1).
From March 1, 2017 to February 28, 2018, 7,802 specimens were screened for HIV using a fourth-generation assay and 23 tested positive for HIV. On further review of these reactive specimens per the CDC laboratory testing algorithm, 16 were true positive and 7 were false positives with a PPV of 70% (16/23).
The fourth-generation assay produced a lower false-positive rate than the third-generation assay (0.09% vs 0.37% of specimens screened), a 75.7% decrease in the false-positivity rate after implementation of fourth-generation testing. The decrease in the number of false-positive test results per month after the fourth-generation test implementation was statistically significant (P = .002). The mean (SD) number of false-positive results was 2.3 (1.7) per month for the third-generation assay vs 0.58 (0.9) per month for the fourth-generation assay. The decrease in the percentage of false positives per month with the implementation of the fourth-generation assay also was statistically significant (P = .002) (Figure 2).
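The reported percentages can be checked directly from the raw counts (note the 75.7% decrease follows from the rounded rates of 0.09% and 0.37%; the unrounded counts give roughly 75.9%):

```python
screened3, reactive3, true3 = 7516, 52, 24    # third-generation year
screened4, reactive4, true4 = 7802, 23, 16    # fourth-generation year

ppv3 = true3 / reactive3                      # 24/52  -> 46%
ppv4 = true4 / reactive4                      # 16/23  -> 70%
fp_rate3 = (reactive3 - true3) / screened3    # 28/7516 -> 0.37%
fp_rate4 = (reactive4 - true4) / screened4    # 7/7802  -> 0.09%
decrease = 1 - fp_rate4 / fp_rate3            # ~75.9% from raw counts
print(f"{ppv3:.0%} {ppv4:.0%} {fp_rate3:.2%} {fp_rate4:.2%} {decrease:.1%}")
```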
For population-based reference of the tested population at CMJCVAMC, a FileMan search provided basic demographic data for patients whose specimens were screened by the third- or fourth-generation test (Table). Of the 7,516 patients tested by the third-generation assay, 1,114 did not have readily available demographic information from the FileMan search because the specimens originated outside the facility. For the remaining 6,402 patients, age ranged from 25 to 97 years (mean, 57 years); this population was 88% male (n = 5,639), 50% African American (n = 3,220), and 43% White (n = 2,756). Of the 7,802 patients tested by the fourth-generation assay, 993 did not have readily available demographic information because the specimens originated outside the facility. For the remaining 6,809 patients, age ranged from 24 to 97 years (mean, 56 years); this population was 88% male (n = 5,971), 47% African American (n = 3,189), and 46% White (n = 3,149).
Discussion
Current practice guidelines from the CDC and the US Preventive Services Task Force recommend universal screening of the population for HIV infection.5,6 As the general population to be screened would normally have a low prevalence of HIV infection, the risk of a false positive on the initial screen is significant.17 Indeed, the CMJCVAMC experience has been that with the third-generation screening assay, the number of false-positive test results outnumbered the number of true-positive test results. Even with the fourth-generation assay, approximately one-third of the results were false positives. These results are similar to those observed in studies involving nonveteran populations in which the implementation of a fourth-generation screening assay led to significantly fewer false-positive results.18
For laboratories that do not follow the CDC testing algorithm guidelines, each false-positive screening result represents a potential opportunity for an HIV misdiagnosis. Even in laboratories with proper procedures in place, false-positive results have consequences for patients and for the cost-effectiveness of laboratory operations.9-11,18 Per CDC HIV testing guidelines, all positive screening results should be retested, which requires additional technologist time and reagents. Only after this additional testing is performed and reviewed appropriately can a final laboratory diagnosis be rendered that meets the standard of laboratory care.
Cost Savings
As observed at CMJCVAMC, the use of a fourth-generation assay with increased sensitivity/specificity led to a reduction in these false-positive results, which improved laboratory efficiency and avoided wasted resources for confirmatory tests.11,18 Cost savings at CMJCVAMC from the implementation of the fourth-generation assay would include technologist time and reagent cost. Generalizable technologist time costs at any institution would include the time needed to perform the confirmatory HIV-1/HIV-2 antibody differentiation assay (slightly less than 1 hour at CMJCVAMC per specimen) and the time needed to perform the viral load assay (about 6 hours to run a batch of 24 tests at CMJCVAMC). We calculated that confirmatory testing cost $184.51 per test at CMJCVAMC. Replacing the third-generation assay with the more sensitive and specific fourth-generation test saved an estimated $3,875 annually. This cost savings does not even consider savings in the pathologist/director’s time for reviewing HIV results after the completion of the algorithm or the clinician/patient costs or anxiety while waiting for results of the confirmatory sequence of tests.
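The annual savings figure can be reconstructed from the numbers given, assuming the $184.51 per-test cost applies to each avoided false-positive confirmatory workup (28 false positives in the third-generation year vs 7 in the fourth-generation year):

```python
cost_per_confirmatory = 184.51   # reported cost per confirmatory workup at CMJCVAMC
fp_before, fp_after = 28, 7      # annual false positives, third- vs fourth-generation
savings = (fp_before - fp_after) * cost_per_confirmatory
print(f"${savings:,.2f}")  # $3,874.71, i.e., the ~$3,875 reported
```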
As diagnosis of HIV can have a significant psychological impact on the patient, it is important to ensure the diagnosis conveyed is correct.27 The provision of an HIV diagnosis to a patient has been described as a traumatic stressor capable of causing psychological harm; this harm should ideally be avoided if the HIV diagnosis is not accurate. There can be a temptation, when presented with a positive or reactive screening test that is known to come from an instrument or assay with a very high sensitivity and specificity, to present this result as a diagnosis to the patient. However, a false diagnosis from a false-positive screen would not only be harmful, but given the low prevalence of the disease in the screened population, would happen fairly frequently; in some settings the number of false positives may actually outnumber the number of true positive test results.
Better screening assays with greater specificity (even fractions of a percentage, given that specificities are already > 99%) would help reduce the number of false positives and reduce the number of potential enticements to convey an incorrect diagnosis. Therefore, by adding an additional layer of safety through greater specificity, the fourth-generation assay implementation helped improve the diagnostic safety of the laboratory and reduced the significant error risk to the clinician who would ultimately bear responsibility for conveying the HIV diagnoses to the patient. Given the increased prevalence of psychological and physical ailments in veterans, it may be even more important to ensure the diagnosis is correct to avoid increased psychological harm.27,28
Veteran Population
For the general population, the fourth-generation assay has been shown to be more sensitive and specific when compared with the third-generation assay due to the addition of detection of p24 antigen and the refinement of the antigenic targets for the antibody detection.6,8,11-13,18-20,22 However, the veteran population that receives VA medical care differs significantly from the nonveteran general population. Compared with nonveterans, veterans tend to have generally poorer health status, more comorbid conditions, and greater need to use medical resources.24-26 In addition, veterans also may differ in sociodemographic status, race, ethnicity, and gender.24-26
VA research in the veteran population is unique, and veterans who use VA health care services are an even more highly selected subpopulation.26 Conclusions from studies of the general population may not always apply to the veteran population treated by VA health care services because of these differences. Therefore, studies tailored to the veteran population in the VA health care setting are essential to confirm that findings from the general population truly apply to veterans.
While the false-positive risk is most closely associated with testing in a low-prevalence population, false-positive screening results also can occur in high-risk individuals, such as those taking preexposure prophylaxis (PrEP) for ongoing behavior that places them at high risk of HIV acquisition.8,29 A false-positive result in these cases presents a conundrum for the clinician, and the differential diagnosis should consider both very early infection and a false positive. Interventions could include stopping PrEP and treating for presumed early primary HIV infection, or continuing PrEP. Each option carries potential harm: emergence of resistant virus if an inadequate treatment regimen is inadvertently provided, increased risk of infection if PrEP is stopped while the high-risk behavior continues, or the adverse effects of the additional antiretroviral agents needed for complete empiric therapy. Cases of individuals on PrEP with false-positive HIV screening tests have been reported both within and outside the veteran population.8 Better screening tests with greater sensitivity and specificity can only help guide better patient care.
Limitations
This quality assurance study was limited to retrospectively identifying the improvement in the false-positive rate on the transition from the third-generation to the more advanced fourth-generation HIV screen. False-positive screen cases could be easily picked up on review of the confirmatory testing per the CDC laboratory HIV testing algorithm.12,13 This study also was a retrospective review of clinically ordered and indicated testing; as a result, without confirmatory testing performed on all negative screen cases, a false-negative rate would not be calculable.
This study also was restricted to only the population being treated in a VA health care setting. This population is known to be different from the general population.24-26
Conclusions
The switch to a fourth-generation assay resulted in a significant reduction in false-positive test results for veteran patients at CMJCVAMC. This reduction not only decreased the laboratory workload of confirmatory testing and subsequent review, but also saved technologist time and reagent costs. While this reduction in false-positive results has been documented in nonveteran populations, this is the first study specifically of a veteran population treated at a VAMC.8,11,18 This study confirms that the previously documented improvement in the false-positive rate of HIV screening with the change from a third-generation to a fourth-generation assay also applies to a veteran population.24
Ever since the first clinical reports of patients with AIDS in 1981, there have been improvements both in the knowledge base of the pathogenesis of HIV in causing AIDS as well as a progressive refinement in the test methodologies used to diagnose this illness.1-3 Given that there are both public health and clinical benefits in earlier diagnosis and treatment of patients with available antiretroviral therapies, universal screening with opt-out consent has been a standard of practice recommendation by the Centers of Disease Control and Prevention (CDC) since 2006; universal screening with opt-out consent also has been recommended by the US Preventative Task Force and has been widely implemented.4-7
HIV Screening
While HIV screening assays have evolved to be accurate with very high sensitivities and specificities, false-positive results are a significant issue both currently and historically.8-16 The use of an HIV assay on a low prevalence population predictably reduces the positive predictive value (PPV) of even an otherwise accurate assay.8-23 In light of this, laboratory HIV testing algorithms include confirmatory testing to increase the likelihood that the correct diagnosis is being rendered.
The fourth-generation assay has been shown to be more sensitive and specific compared with that of the third-generation assay due to the addition of detection of p24 antigen and the refinement of the antigenic targets for the antibody detection.6,8,11-13,18-20,22 Due to these improvements, in the general population, increased sensitivity/specificity with a reduction in both false positives and false negatives have been reported.
It has been observed in the nonveteran population that switching from the older third-generation to a more sensitive and specific fourth-generation HIV screening assay has reduced the false-positive screening rate.18,19,22 For instance, Muthukumar and colleagues demonstrated a false-positive rate of only 2 out of 99 (2%) tested specimens for the fourth-generation ARCHITECT HIV Ag/Ab Combo assay vs 9 out of 99 tested specimens (9%) for the third-generation ADVIA Centaur HIV 1/O/2 Enhanced assay.18 In addition, it has been noted that fourth-generation HIV screening assays can reduce the window period by detecting HIV infection sooner after initial acute infection.19 Mitchell and colleagues demonstrated even highly specific fourth-generation HIV assays with specificities estimated at 99.7% can have PPVs as low as 25.0% if used in a population of low HIV prevalence (such as a 0.1% prevalence population).19 However, the veteran population has been documented to differ significantly on a number of population variables, including severity of disease and susceptibility to infections, and as a result extrapolation of these data from the general population may be limited.24-26 To our knowledge, this article represents the first study directly examining the reduction in false-positive results with the switch to a fourth-generation HIV generation assay from a third-generation assay for the veteran patient population at a regional US Department of Veterans Affairs (VA) medical center (VAMC).8,11
Methods
Quality assurance documents on test volume were retrospectively reviewed to obtain the number of HIV screening tests that were performed by the laboratory at the Corporal Michael J. Crescenz VAMC (CMJCVAMC) in Philadelphia, Pennsylvania, between March 1, 2016 and February 28, 2017, prior to implementation of the fourth-generation assay. The study also include results from the first year of use of the fourth-generation assay (March 1, 2017 to February 28, 2018). In addition, paper quality assurance records of all positive screening results during those periods were reviewed and manually counted for the abstract presentation of these data.
For assurance of accuracy, a search of all HIV testing assays using Veterans Health Information Systems and Technology Architecture and FileMan also was performed, and the results were compared to records in the Computerized Patient Record System (CPRS). Any discrepancies in the numbers of test results generated by both searches were investigated, and data for the manuscript were derived from records associating tests with particular patients. Only results from patient samples were considered for the electronic search. Quality samples that did not correspond to a true patient as identified in CPRS or same time patient sample duplicates were excluded from the calculations. Basic demographic data (age, ethnicity, and gender) were obtained from this FileMan search. The third-generation assay was the Ortho-Clinical Diagnostics Vitros, and the fourth-generation assay was the Abbott Architect.
To interpret the true HIV result of each sample with a reactive or positive screening result, the CDC laboratory HIV testing algorithm was followed and reviewed with a clinical pathologist or microbiologist director.12,13 All specimens interpreted as HIV positive by the pathologist or microbiologist director were discussed with the clinical health care provider at the time of the test with results added to CPRS after all testing was complete and discussions had taken place. All initially reactive specimens (confirmed with retesting in duplicate on the screening platform with at least 1 repeat reactive result) were further tested with the Bio-Rad Geenius HIV 1/2 Supplemental Assay, which screens for both HIV-1 and HIV-2 antibodies. Specimens with reactive results by this supplemental assay were interpreted as positive for HIV based on the CDC laboratory HIV testing algorithm. Specimens with negative or indeterminant results by the supplemental assay then underwent HIV-1 nucleic acid testing (NAT) using the Roche Diagnostics COBAS AmpliPrep/COBAS TaqMan HIV-1 Test v2.0. Specimens with viral load detected on NAT were positive for HIV infection, while specimens with viral load not detected on NAT testing were interpreted as negative for HIV-1 infection. Although there were no HIV-2 positive or indeterminant specimens during the study period, HIV-2 reactivity also would have been interpreted per the CDC laboratory HIV testing algorithm. Specimens with inadequate volume to complete all testing steps would be interpreted as indeterminant for HIV with request for additional specimen to complete testing. All testing platforms used for HIV testing in the laboratory had been properly validated prior to use.
The number of false positives and indeterminant results was tabulated in Microsoft Excel by month throughout the study period alongside the total number of HIV screening tests performed. Statistical analyses to verify statistical significance was performed by 1-tailed homoscedastic t test calculation using Excel.
Results
From March 1, 2016 to February 28, 2017, 7,516 specimens were screened for HIV, using the third-generation assay, and 52 specimens tested positive for HIV. On further review of these reactive specimens per the CDC laboratory testing algorithm, 24 tests were true positive and 28 were false positives with a PPV of 46% (24/52) (Figure 1).
From March 1, 2017 to February 28, 2018, 7,802 specimens were screened for HIV using a fourth-generation assay and 23 tested positive for HIV. On further review of these reactive specimens per the CDC laboratory testing algorithm, 16 were true positive and 7 were false positives with a PPV of 70% (16/23).
The fourth-generation assay was more specific when compared with the third-generation assay (0.09% vs 0.37%, respectively) with a 75.7% decrease in the false-positivity rate after the implementation of fourth-generation testing. The decreased number of false-positive test results per month with the fourth-generation test implementation was statistically significant (P = .002). The mean (SD) number of false-positive test results for the third-generation assay was 2.3 (1.7) per month, while the fourth-generation assay only had a mean (SD) of 0.58 (0.9) false positives monthly. The decrease in the percentage of false positives per month with the implementation of the fourth-generation assay also was statistically significant (P = .002) (Figure 2).
For population-based reference of the tested population at CMJCVAMC, there was a FileMan search for basic demographic data of patients for the HIV specimens screened by the third- or fourth-generation test (Table). For the population tested by the third-generation assay, 1,114 out of the 7,516 total tested population did not have readily available demographic information by the FileMan search as the specimens originated outside of the facility. For 6,402 of 7,516 patients tested by the third-generation assay with demographic information, the age ranged from 25 to 97 years with a mean of 57 years. This population of 6,402 was 88% male (n = 5,639), 50% African American (n = 3,220) and 43% White (n = 2,756). For the population tested by the fourth-generation assay, 993 of 7,802 total tested population did not have readily available demographic information by the FileMan search as the specimens originated outside of the facility. For the 6,809 of 7,802 patients tested by the fourth-generation assay with demographic information, the age ranged from 24 to 97 years with a mean age of 56 years. This population was 88% male (n = 5,971), 47% African American (n = 3,189), and 46% White (n = 3,149).
Discussion
Current practice guidelines from the CDC and the US Preventive Services Task Force recommend universal screening of the population for HIV infection.5,6 As the general population to be screened would normally have a low prevalence of HIV infection, the risk of a false positive on the initial screen is significant.17 Indeed, the CMJCVAMC experience has been that with the third-generation screening assay, the number of false-positive test results outnumbered the number of true-positive test results. Even with the fourth-generation assay, approximately one-third of the results were false positives. These results are similar to those observed in studies involving nonveteran populations in which the implementation of a fourth-generation screening assay led to significantly fewer false-positive results.18
For laboratories that do not follows CDC testing algorithm guidelines, each false-positive screening result represents a potential opportunity for a HIV misdiagnosis.Even in laboratories with proper procedures in place, false-positive results have consequences for the patients and for the cost-effectiveness of laboratory operations.9-11,18 As per CDC HIV testing guidelines, all positive screening results should be retested, which leads to additional use of technologist time and reagents. After this additional testing is performed and reviewed appropriately, only then can an appropriate final laboratory diagnosis be rendered that meets the standard of laboratory care.
Cost Savings
As observed at CMJCVAMC, the use of a fourth-generation assay with increased sensitivity/specificity led to a reduction in these false-positive results, which improved laboratory efficiency and avoided wasted resources for confirmatory tests.11,18 Cost savings at CMJCVAMC from the implementation of the fourth-generation assay would include technologist time and reagent cost. Generalizable technologist time costs at any institution would include the time needed to perform the confirmatory HIV-1/HIV-2 antibody differentiation assay (slightly less than 1 hour at CMJCVAMC per specimen) and the time needed to perform the viral load assay (about 6 hours to run a batch of 24 tests at CMJCVAMC). We calculated that confirmatory testing cost $184.51 per test at CMJCVAMC. Replacing the third-generation assay with the more sensitive and specific fourth-generation test saved an estimated $3,875 annually. This cost savings does not even consider savings in the pathologist/director’s time for reviewing HIV results after the completion of the algorithm or the clinician/patient costs or anxiety while waiting for results of the confirmatory sequence of tests.
As diagnosis of HIV can have a significant psychological impact on the patient, it is important to ensure the diagnosis conveyed is correct.27 The provision of an HIV diagnosis to a patient has been described as a traumatic stressor capable of causing psychological harm; this harm should ideally be avoided if the HIV diagnosis is not accurate. There can be a temptation, when presented with a positive or reactive screening test that is known to come from an instrument or assay with a very high sensitivity and specificity, to present this result as a diagnosis to the patient. However, a false diagnosis from a false-positive screen would not only be harmful, but given the low prevalence of the disease in the screened population, would happen fairly frequently; in some settings the number of false positives may actually outnumber the number of true positive test results.
Better screening assays with greater specificity (even fractions of a percentage, given that specificities are already > 99%) would help reduce the number of false positives and reduce the number of potential enticements to convey an incorrect diagnosis. Therefore, by adding an additional layer of safety through greater specificity, the fourth-generation assay implementation helped improve the diagnostic safety of the laboratory and reduced the significant error risk to the clinician who would ultimately bear responsibility for conveying the HIV diagnoses to the patient. Given the increased prevalence of psychological and physical ailments in veterans, it may be even more important to ensure the diagnosis is correct to avoid increased psychological harm.27,28
Veteran Population
For the general population, the fourth-generation assay has been shown to be more sensitive and specific than the third-generation assay because of the added detection of p24 antigen and the refinement of antigenic targets for antibody detection.6,8,11-13,18-20,22 However, the veteran population receiving VA medical care differs significantly from the nonveteran general population. Compared with nonveterans, veterans tend to have poorer health status, more comorbid conditions, and greater need for medical resources.24-26 Veterans also may differ in sociodemographic status, race, ethnicity, and gender.24-26
VA research in the veteran population is unique, and veterans who use VA health care services are an even more highly selected subpopulation.26 Because of these population differences, conclusions drawn from studies of the general population may not always apply to veterans treated by VA health care services. Studies tailored to this veteran population in the VA health care setting are therefore essential to confirm that findings from the general population hold for veterans.
While the false-positive risk is most closely associated with testing in a low-prevalence population, false-positive screening results also can occur in high-risk individuals, such as those taking preexposure prophylaxis (PrEP) for ongoing behavior that places them at high risk of HIV acquisition.8,29 A false-positive result in these cases presents a conundrum for the clinician, as the differential diagnosis must consider both very early infection and a false positive. Interventions could include stopping PrEP and treating for presumed early primary HIV infection, or continuing PrEP. Each option carries potential harm: an inadvertently inadequate treatment regimen could select for resistant virus; stopping PrEP could increase the risk of infection if the high-risk behavior continues; and complete empiric therapy carries the risks of additional antiretroviral agents. Cases of individuals on PrEP with false-positive HIV screening tests have been reported both within and outside the veteran population.8 Screening tests with greater sensitivity and specificity can only help guide better patient care.
Limitations
This quality assurance study was limited to retrospectively identifying the improvement in the false-positive rate on transition from the third-generation to the fourth-generation HIV screen. False-positive screens were readily identified on review of the confirmatory testing performed per the CDC laboratory HIV testing algorithm.12,13 Because this was a retrospective review of clinically ordered and indicated testing, confirmatory testing was not performed on negative screens, and a false-negative rate could not be calculated.
This study also was restricted to only the population being treated in a VA health care setting. This population is known to be different from the general population.24-26
Conclusions
The switch to a fourth-generation assay resulted in a significant reduction in false-positive test results for veterans at CMJCVAMC. This reduction not only decreased the laboratory workload of confirmatory testing and subsequent review, but also saved technologist time and reagent costs. While this reduction in false-positive results has been documented in nonveteran populations, this is the first study specifically of a veteran population treated at a VAMC.8,11,18 It confirms, for a veteran population, the previously documented improvement in the false-positive rate of HIV screening tests on the change from a third-generation to a fourth-generation assay.24
1. Feinberg MB. Changing the natural history of HIV disease. Lancet. 1996;348(9022):239-246. doi:10.1016/s0140-6736(96)06231-9
2. Alexander TS. Human immunodeficiency virus diagnostic testing: 30 years of evolution. Clin Vaccine Immunol. 2016;23(4):249-253. Published 2016 Apr 4. doi:10.1128/CVI.00053-16
3. Mortimer PP, Parry JV, Mortimer JY. Which anti-HTLV III/LAV assays for screening and confirmatory testing?. Lancet. 1985;2(8460):873-877. doi:10.1016/s0140-6736(85)90136-9
4. Holmberg SD, Palella FJ Jr, Lichtenstein KA, Havlir DV. The case for earlier treatment of HIV infection [published correction appears in Clin Infect Dis. 2004 Dec 15;39(12):1869]. Clin Infect Dis. 2004;39(11):1699-1704. doi:10.1086/425743
5. US Preventive Services Task Force, Owens DK, Davidson KW, et al. Screening for HIV Infection: US Preventive Services Task Force Recommendation Statement. JAMA. 2019;321(23):2326-2336. doi:10.1001/jama.2019.6587
6. Branson BM, Handsfield HH, Lampe MA, et al. Revised recommendations for HIV testing of adults, adolescents, and pregnant women in health-care settings. MMWR Recomm Rep. 2006;55(RR-14):1-CE4.
7. Bayer R, Philbin M, Remien RH. The end of written informed consent for HIV testing: not with a bang but a whimper. Am J Public Health. 2017;107(8):1259-1265. doi:10.2105/AJPH.2017.303819
8. Petersen J, Jhala D. It’s not HIV! The pitfall of unconfirmed positive HIV screening assays. Abstract presented at: Annual Meeting Pennsylvania Association of Pathologists; April 14, 2018.
9. Wood RW, Dunphy C, Okita K, Swenson P. Two “HIV-infected” persons not really infected. Arch Intern Med. 2003;163(15):1857-1859. doi:10.1001/archinte.163.15.1857
10. Permpalung N, Ungprasert P, Chongnarungsin D, Okoli A, Hyman CL. A diagnostic blind spot: acute infectious mononucleosis or acute retroviral syndrome. Am J Med. 2013;126(9):e5-e6. doi:10.1016/j.amjmed.2013.03.017
11. Dalal S, Petersen J, Luta D, Jhala D. Third- to fourth-generation HIV testing: reduction in false-positive results with the new way of testing, the Corporal Michael J. Crescenz Veteran Affairs Medical Center (CMCVAMC) experience. Am J Clin Pathol. 2018;150(suppl 1):S70-S71. doi:10.1093/ajcp/aqy093.172
12. Centers for Disease Control and Prevention. Laboratory testing for the diagnosis of HIV infection: updated recommendations. Published June 27, 2014. Accessed April 14, 2021. doi:10.15620/cdc.23447
13. Centers for Disease Control and Prevention. 2018 quick reference guide: recommended laboratory HIV testing algorithm for serum or plasma specimens. Updated January 2018. Accessed April 14, 2021. https://stacks.cdc.gov/view/cdc/50872
14. Masciotra S, McDougal JS, Feldman J, Sprinkle P, Wesolowski L, Owen SM. Evaluation of an alternative HIV diagnostic algorithm using specimens from seroconversion panels and persons with established HIV infections. J Clin Virol. 2011;52(suppl 1):S17-S22. doi:10.1016/j.jcv.2011.09.011
15. Morton A. When lab tests lie … heterophile antibodies. Aust Fam Physician. 2014;43(6):391-393.
16. Spencer DV, Nolte FS, Zhu Y. Heterophilic antibody interference causing false-positive rapid human immunodeficiency virus antibody testing. Clin Chim Acta. 2009;399(1-2):121-122. doi:10.1016/j.cca.2008.09.030
17. Kim S, Lee JH, Choi JY, Kim JM, Kim HS. False-positive rate of a “fourth-generation” HIV antigen/antibody combination assay in an area of low HIV prevalence. Clin Vaccine Immunol. 2010;17(10):1642-1644. doi:10.1128/CVI.00258-10
18. Muthukumar A, Alatoom A, Burns S, et al. Comparison of 4th-generation HIV antigen/antibody combination assay with 3rd-generation HIV antibody assays for the occurrence of false-positive and false-negative results. Lab Med. 2015;46(2):84-e29. doi:10.1309/LMM3X37NSWUCMVRS
19. Mitchell EO, Stewart G, Bajzik O, Ferret M, Bentsen C, Shriver MK. Performance comparison of the 4th generation Bio-Rad Laboratories GS HIV Combo Ag/Ab EIA on the EVOLIS™ automated system versus Abbott ARCHITECT HIV Ag/Ab Combo, Ortho Anti-HIV 1+2 EIA on Vitros ECi and Siemens HIV-1/O/2 enhanced on Advia Centaur. J Clin Virol. 2013;58(suppl 1):e79-e84. doi:10.1016/j.jcv.2013.08.009
20. Dubravac T, Gahan TF, Pentella MA. Use of the Abbott Architect HIV antigen/antibody assay in a low incidence population. J Clin Virol. 2013;58(suppl 1):e76-e78. doi:10.1016/j.jcv.2013.10.020
21. Montesinos I, Eykmans J, Delforge ML. Evaluation of the Bio-Rad Geenius HIV-1/2 test as a confirmatory assay. J Clin Virol. 2014;60(4):399-401. doi:10.1016/j.jcv.2014.04.025
22. van Binsbergen J, Siebelink A, Jacobs A, et al. Improved performance of seroconversion with a 4th generation HIV antigen/antibody assay. J Virol Methods. 1999;82(1):77-84. doi:10.1016/s0166-0934(99)00086-5
23. CLSI. User Protocol for Evaluation of Qualitative Test Performance: Approved Guideline. Second ed. EP12-A2. CLSI; 2008:1-46.
24. Agha Z, Lofgren RP, VanRuiswyk JV, Layde PM. Are patients at Veterans Affairs medical centers sicker? A comparative analysis of health status and medical resource use. Arch Intern Med. 2000;160(21):3252-3257. doi:10.1001/archinte.160.21.3252
25. Eibner C, Krull H, Brown KM, et al. Current and projected characteristics and unique health care needs of the patient population served by the Department of Veterans Affairs. Rand Health Q. 2016;5(4):13. Published 2016 May 9.
26. Morgan RO, Teal CR, Reddy SG, Ford ME, Ashton CM. Measurement in Veterans Affairs Health Services Research: veterans as a special population. Health Serv Res. 2005;40(5, pt 2):1573-1583. doi:10.1111/j.1475-6773.2005.00448.x
27. Nightingale VR, Sher TG, Hansen NB. The impact of receiving an HIV diagnosis and cognitive processing on psychological distress and posttraumatic growth. J Trauma Stress. 2010;23(4):452-460. doi:10.1002/jts.20554
28. Spelman JF, Hunt SC, Seal KH, Burgo-Black AL. Post deployment care for returning combat veterans. J Gen Intern Med. 2012;27(9):1200-1209. doi:10.1007/s11606-012-2061-1
29. Ndase P, Celum C, Kidoguchi L, et al. Frequency of false positive rapid HIV serologic tests in African men and women receiving PrEP for HIV prevention: implications for programmatic roll-out of biomedical interventions. PLoS One. 2015;10(4):e0123005. Published 2015 Apr 17. doi:10.1371/journal.pone.0123005
Risk Factors and Antipsychotic Usage Patterns Associated With Terminal Delirium in a Veteran Long-Term Care Hospice Population
Delirium is a condition commonly exhibited by hospitalized patients and by those approaching the end of life.1 Patients who experience a disturbance in attention that develops over a relatively short period and represents an acute change may have delirium.2 There is often an additional cognitive disturbance, such as disorientation; a deficit in memory, language, or visuospatial ability; or a perceptual disturbance. Terminal delirium is defined as delirium that occurs in the dying process and implies that reversal is less likely.3 When death is anticipated, diagnostic workups are not recommended, and treatment of the physiologic abnormalities that contribute to delirium is generally ineffective.4
Background
Delirium is often underdiagnosed; studies have shown that it goes undetected in 22% to 50% of cases.5 Factors contributing to underdetection include preexisting dementia, older age, visual or hearing impairment, and a hypoactive presentation. Other possible reasons are delirium’s fluctuating nature and the lack of formal cognitive assessment as part of routine screening across care settings.5 Another study found that 41% of health care providers (HCPs) felt that screening for delirium was burdensome.6
To date, there are no veteran-focused studies that investigate prevalence or risk factors for terminal delirium in US Department of Veterans Affairs (VA) long-term care hospice units. Most long-term care hospice units in the VA are in community living centers (CLCs) that follow regulatory guidelines for using antipsychotic medications. The Centers for Medicare and Medicaid Services state that if antipsychotics are prescribed, documentation must clearly show the indication for the antipsychotic medication, the multiple attempts to implement planned care, nonpharmacologic approaches, and ongoing evaluation of the effectiveness of these interventions.7 The symptoms of terminal delirium cause significant distress to patients, family and caregivers, and nursing staff. Literature suggests that delirium poses significant relational challenges for patients, families, and HCPs in end-of-life situations.8,9 We hypothesize that the early identification of risk factors for the development of terminal delirium in this population may lead to increased use of nonpharmacologic measures to prevent terminal delirium, increase nursing vigilance for development of symptoms, and reduce symptom burden should terminal delirium develop.
Prevalence of delirium in the long-term care setting has ranged between 1.4% and 70.3%.10 The rate was much higher in institutionalized populations than among patients cared for at home. In a study of the prevalence, severity, and natural history of neuropsychiatric syndromes in terminally ill veterans enrolled in community hospice, delirium was present in only 4.1% at the initial visit but in 42.5% at the last visit, and more than half had at least 1 episode of delirium during the 90-day study period.11 In a study of the prevalence of delirium in terminal cancer patients admitted to hospice, 80% experienced delirium in their final days.12
Risk factors for the development of delirium that have been identified in actively dying patients include bowel or bladder obstruction, fluid and electrolyte imbalances, suboptimal pain management, medication adverse effects and toxicity (eg, benzodiazepines, opioids, anticholinergics, and steroids), the addition of ≥ 3 medications, infection, hepatic and renal failure, poor glycemic control, hypoxia, and hematologic disturbances.4,5,13 A high percentage of patients with a previous diagnosis of dementia were found to exhibit terminal delirium.14
There are 2 major subtypes of delirium: hyperactive and hypoactive.4 Patients with hypoactive delirium exhibit lethargy, reduced motor activity, lack of interest, and/or incoherent speech. There is currently little evidence to guide the treatment of hypoactive delirium. By contrast, hyperactive delirium is associated with hallucinations, agitation, heightened arousal, and inappropriate behavior. Many studies suggest both nonpharmacologic and pharmacologic treatment modalities for hyperactive delirium.4,13 Nonpharmacologic interventions may minimize the risk and severity of symptoms associated with delirium, and current guidelines recommend these interventions before pharmacologic treatment.4 Nonpharmacologic interventions include but are not limited to the following: engaging the patient in mentally stimulating activities; surrounding the patient with familiar materials (eg, photos); ensuring that all individuals identify themselves when they encounter a patient; minimizing the intensity of stimulation; providing family or volunteer presence, soft lighting, and warm blankets; and ensuring the patient uses hearing aids and glasses if needed.4,14
Although no medications are approved by the US Food and Drug Administration to treat hyperactive delirium, first-generation antipsychotics (eg, haloperidol, chlorpromazine) are considered first-line treatment for patients exhibiting psychosis and psychomotor agitation.3,4,14-16 In terminally ill patients, there is limited evidence from clinical trials to support the efficacy of drug therapy.14 One study showed a lack of efficacy with hydration and opioid rotation.17 In terminally ill patients experiencing hyperactive delirium, there is a significantly increased risk of muscle tension, myoclonic seizures, and distress to the patient, family, and caregiver.1 Benzodiazepines can be considered first-line treatment for dying patients with terminal delirium when the goals of treatment are to relieve muscle tension, ensure amnesia, reduce the risk of seizures, and decrease psychosis and agitation.18,19 In patients with a history of alcohol misuse who are experiencing terminal delirium, benzodiazepines also may be the preferred pharmacologic treatment.20 Caution must be exercised because benzodiazepines can also cause oversedation, increased confusion, and/or a paradoxical worsening of delirium.3,4,14
Methods
This was a retrospective case-control study of patients who died in the Edward Hines Jr. Veterans Affairs Hospital CLC in Hines, Illinois, under the treating specialty nursing home hospice from October 1, 2013, to September 30, 2015. Given the study’s retrospective nature, antipsychotic use within the last 2 weeks of life served as a surrogate marker for the development of terminal delirium. Cases were defined as patients who were treated with antipsychotics for terminal delirium within the last 2 weeks of life; controls were defined as patients who were not. Living hospice patients and patients discharged from the CLC before death were excluded.
The goals of this study were to (1) determine risk factors in the VA CLC hospice veteran population for the development of terminal delirium; (2) evaluate documentation by the nursing staff of nonpharmacologic interventions and indications for antipsychotic use in the treatment of terminal delirium; and (3) examine the current usage patterns of antipsychotics for the treatment of terminal delirium.
Veterans’ medical records were reviewed from 2 weeks before death until the recorded death date. Factors that were assessed included age, war era of service, date of death, terminal diagnosis, time interval from cancer diagnosis to death, comorbid conditions, prescribed antipsychotic medications, and other medications potentially contributing to delirium. Nursing documentation was reviewed for indications for administration of antipsychotic medications and nonpharmacologic interventions used to mitigate the symptoms of terminal delirium.
Statistical analysis was conducted in SAS Version 9.3. Cases were compared with controls using univariate and multivariate statistics as appropriate. Comparisons for continuous variables (eg, age) were conducted with Student t tests. Categorical variables (eg, PTSD diagnosis) were compared using χ2 analysis or Fisher exact test as appropriate. Variables with a P value < .1 in the univariate analysis were included in logistic regression models. Independent variables were removed from the models, using a backward selection process. Interaction terms were tested based on significance and clinical relevance. A P value < .05 was considered statistically significant.
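The univariate screening step described above (a chi-square test on a case/control by exposure table, with P < .1 flagging a variable for model entry) can be sketched as follows. This is an illustrative re-implementation, not the study’s code; the analysis was performed in SAS Version 9.3, and the counts below are hypothetical.

```python
# Sketch of the univariate screen: Pearson chi-square on a 2x2 table
# (case/control x exposed/unexposed). Counts are hypothetical.
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n
        stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical counts: exposed/unexposed among 186 cases and 90 controls
stat = chi_square_2x2(40, 146, 10, 80)
# With 1 df, stat > 3.84 corresponds to P < .05; stat > 2.71 corresponds
# to the P < .1 screening threshold used for entry into the logistic model
```

In practice small expected cell counts would instead trigger the Fisher exact test, as the text notes; the threshold comparison mirrors how variables were selected for the regression models.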
Results
From October 1, 2013, to September 30, 2015, 307 patients were assessed for inclusion in this study. Of these, 186 received antipsychotic medications for the treatment of terminal delirium (cases), while 90 did not receive antipsychotics (controls). Of the 31 excluded patients, 13 were discharged to home hospice care, 11 were discharged to community nursing homes, 5 died in acute care units of Edward Hines, Jr. VA Hospital, and 2 died outside the study period.
The mean age of all included patients was 75.5 years, and the most common terminal diagnosis was cancer, which occurred in 156 patients (56.5%) (Table 1). Baseline characteristics were similar between cases and controls, including war era of service, terminal diagnosis, and comorbid conditions. The mean time between cancer diagnosis and death did not differ notably between the control and case groups (25 vs 16 months, respectively), and there was no statistically significant difference in terminal diagnoses between cases and controls. Veterans in the control group spent more days in the hospice unit than veterans who experienced terminal delirium (mean [SD], 48.5 [168.4] vs 28.2 [46.9]; P = .01). Patients with suspected infections were more likely to be in the control group (P = .04; odds ratio [OR], 1.70; 95% CI, 1.02-2.82).
The most common antipsychotic administered in the last 14 days of life was haloperidol. In the case group, 175 (94%) received haloperidol at least once in the last 2 weeks of life. Four (4.4%) veterans in the control group received haloperidol for the indication of nausea/vomiting, not terminal delirium. Atypical antipsychotics were infrequently used and included risperidone, olanzapine, quetiapine, and aripiprazole.
A total of 186 veterans received at least 1 dose of an antipsychotic for terminal delirium: 97 (52.2%) required both scheduled and as-needed doses, 75 (40.3%) received only as-needed doses, and 14 (7.5%) required only scheduled doses. When as-needed and scheduled doses were combined, each veteran received a mean of 14.9 doses. For veterans with antipsychotics ordered only as needed, a mean of 5.8 doses was received per patient. Administration of antipsychotic doses was split evenly among the 3 nursing shifts (day-evening-night), with about 30% of doses administered on each shift.
Nurses were expected to document nonpharmacologic interventions that preceded the administration of each antipsychotic dose. Of the 1,028 doses administered to the 186 veterans who received at least 1 dose of an antipsychotic for terminal delirium, most of the doses (99.4%) had inadequate documentation based on current long-term care guidelines for prudent antipsychotic use.9
Several risk factors for terminal delirium were identified in this veteran population. Veterans with a history of drug or alcohol abuse were found to be at a significantly higher risk for terminal delirium (P = .04; OR, 1.87; 95% CI, 1.03-3.37). As noted in previous studies, steroid use (P = .01; OR, 2.57; 95% CI, 1.26-5.22); opioids (P = .007; OR, 5.94; 95% CI, 1.54-22.99), and anticholinergic medications (P = .01; OR, 2.06; 95% CI, 1.21-3.52) also increased the risk of delirium (Table 2).
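For readers unfamiliar with how odds ratios and CIs like those above are derived, a minimal sketch using the standard Woolf (log) method on a 2x2 table follows. The counts are hypothetical and chosen only to produce an estimate of similar magnitude to the drug/alcohol finding; they are not the study’s data.

```python
import math

# Odds ratio and 95% CI via the Woolf (log) method; counts are hypothetical.
def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b = exposed/unexposed cases; c/d = exposed/unexposed controls."""
    or_ = (a * d) / (b * c)                       # cross-product ratio
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)         # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

or_, lower, upper = odds_ratio_ci(60, 126, 18, 72)
# A 95% CI that excludes 1.0 corresponds to P < .05 for the association
```

A CI lower bound just above 1.0, as with several of the ORs reported here, indicates an association that is statistically significant but estimated imprecisely.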
When risk factors were combined, interaction terms were identified (Table 3). Patients at higher risk of terminal delirium included Vietnam-era veterans with liver disease (P = .04; OR, 1.21; 95% CI, 1.01-1.45) and veterans with a history of drug or alcohol abuse plus comorbid liver disease (P = .03; OR, 1.26; 95% CI, 1.02-1.56). In a stratified analysis of veterans with a terminal diagnosis of cancer, those with a mental health condition (eg, PTSD, bipolar disorder, or schizophrenia) trended toward a higher risk of delirium, although the association was not statistically significant (P = .048; OR, 2.73; 95% CI, 0.98-7.58). Within the cancer cohort, veterans with liver disease and a history of drug/alcohol abuse had an increased risk of delirium (P = .01; OR, 1.43; 95% CI, 1.07-1.91).
Discussion
Terminal delirium is experienced by many individuals in their last days to weeks of life. Symptoms can present as hyperactive (eg, agitation, hallucinations, heightened arousal) or hypoactive (lethargy, reduced motor activity, incoherent speech). Hyperactive terminal delirium is particularly problematic because it causes increased distress to the patient, family, and caregivers. Delirium can lead to safety concerns, such as fall risk, due to patients’ decreased insight into functional decline.
1. Casarett DJ, Inouye SK. Diagnosis and management of delirium near the end of life. Ann Intern Med. 2001;135(1):32-40.
2. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 5th ed. Washington, DC; 2013.
3. Grassi L, Caraceni A, Mitchell A, et al. Management of delirium in palliative care: a review. Curr Psychiatry Rep. 2015;17(13):1-9. doi:10.1007/s11920-015-0550-8
4. Bush S, Leonard M, Agar M, et al. End-of-life delirium: issues regarding the recognition, optimal management, and role of sedation in the dying phase. J Pain Symptom Manage. 2014;48(2):215-230. doi:10.1016/j.jpainsymman.2014.05.009
5. Moyer D. Terminal delirium in geriatric patients with cancer at end of life. Am J Hosp Palliat Med. 2010;28(1):44-51. doi:10.1177/1049909110376755
6. Lai X, Huang Z, Chen C, et al. Delirium screening in patients in a palliative care ward: a best practice implementation project. JBI Database System Rev Implement Rep. 2019;17(3):429-441. doi:10.11124/JBISRIR-2017-003646
7. Centers for Medicare and Medicaid Services. Medicare and Medicaid Programs; reform of requirements for long-term care facilities. Final rule. Fed Regist. 2016;81(192):68688-68872. Accessed April 17, 2021. https://pubmed.ncbi.nlm.nih.gov/27731960
8. Wright D, Brajtman S, Macdonald M. A relational ethical approach to end-of-life delirium. J Pain Symptom Manage. 2014;48(2):191-198. doi:10.1016/j.jpainsymman.2013.08.015
9. Brajtman S, Higuchi K, McPherson C. Caring for patients with terminal delirium: palliative care unit and home care nurses’ experience. Int J Palliat Nurs. 2006;12(4):150-156. doi:10.12968/ijpn.2006.12.4.21010
10. Lange E, Verhaak P, Meer K. Prevalence, presentation, and prognosis of delirium in older people in the population, at home and in long-term care: a review. Int J Geriatr Psychiatry. 2013;28(2):127-134. doi:10.1002/gps.3814
11. Goy E, Ganzini L. Prevalence and natural history of neuropsychiatric syndromes in veteran hospice patients. J Pain Symptom Manage. 2011;41(12):394-401. doi:10.1016/j.jpainsymman.2010.04.015
12. Bush S, Bruera E. The assessment and management of delirium in cancer patients. Oncologist. 2009;14(10):1039-1049. doi:10.1634/theoncologist.2009-0122
13. Clary P, Lawson P. Pharmacologic pearls for end-of-life care. Am Fam Physician. 2009;79(12):1059-1065.
14. Blinderman CD, Billings J. Comfort for patients dying in the hospital. N Engl J Med. 2015;373(26):2549-2561. doi:10.1056/NEJMra1411746
15. Irwin SA, Pirrello RD, Hirst JM, Buckholz GT, Ferris FD. Clarifying delirium management: practical, evidence-based, expert recommendations for clinical practice. J Palliat Med. 2013;16(4):423-435. doi:10.1089/jpm.2012.0319
16. Bobb B. Dyspnea and delirium at the end of life. Clin J Oncol Nurs. 2016;20(3):244-246. doi:10.1188/16.CJON.244-246
17. Morita T, Tei Y, Inoue S. Agitated terminal delirium and association with partial opioid substitution and hydration. J Palliat Med. 2003;6(4):557-563. doi:10.1089/109662103768253669
18. Attard A, Ranjith G, Taylor D. Delirium and its treatment. CNS Drugs. 2008;22(8):631-644. doi:10.2165/00023210-200822080-00002
19. Hui D. Benzodiazepines for agitation in patients with delirium: selecting the right patient, right time, and right indication. Curr Opin Support Palliat Care. 2018;12(4):489-494. doi:10.1097/SPC.0000000000000395
20. Irwin P, Murray S, Bilinski A, Chern B, Stafford B. Alcohol withdrawal as an underrated cause of agitated delirium and terminal restlessness in patients with advanced malignancy. J Pain Symptom Manage. 2005;29(1):104-108. doi:10.1016/j.jpainsymman.2004.04.010
21. Lokker ME, van Zuylen L, van der Rijt CCD, van der Heide A. Prevalence, impact, and treatment of death rattle: a systematic review. J Pain Symptom Manage. 2014;48:2-12. doi:10.1016/j.jpainsymman.2013.03.011
22. Sessler C, Gosnell M, Grap M, et al. The Richmond Agitation–Sedation Scale: validity and reliability in adult intensive care unit patients. Am J Respir Crit Care Med. 2002;166(10):1338-1344. doi:10.1164/rccm.2107138
Delirium is a condition commonly exhibited by hospitalized patients and by those who are approaching the end of life.1 Patients who experience a disturbance in attention that develops over a relatively short period and represents an acute change may have delirium.2 There is often an additional cognitive disturbance, such as disorientation, memory deficit, language deficit, visuospatial deficit, or perceptual disturbance. Terminal delirium is defined as delirium that occurs in the dying process and implies that reversal is less likely.3 When death is anticipated, diagnostic workups are not recommended, and treatment of the physiologic abnormalities that contribute to delirium is generally ineffective.4
Background
Delirium is often underdiagnosed and goes undetected by clinicians; some studies have shown that it is missed in 22 to 50% of cases.5 Factors that contribute to underdetection include preexisting dementia, older age, visual or hearing impairment, and a hypoactive presentation. Other possible reasons for nondetection are the fluctuating nature of delirium and the lack of formal cognitive assessment as part of routine screening across care settings.5 Another study found that 41% of health care providers (HCPs) felt that screening for delirium was burdensome.6
To date, there are no veteran-focused studies that investigate prevalence or risk factors for terminal delirium in US Department of Veterans Affairs (VA) long-term care hospice units. Most long-term care hospice units in the VA are in community living centers (CLCs) that follow regulatory guidelines for using antipsychotic medications. The Centers for Medicare and Medicaid Services state that if antipsychotics are prescribed, documentation must clearly show the indication for the antipsychotic medication, the multiple attempts to implement planned care, nonpharmacologic approaches, and ongoing evaluation of the effectiveness of these interventions.7 The symptoms of terminal delirium cause significant distress to patients, family and caregivers, and nursing staff. Literature suggests that delirium poses significant relational challenges for patients, families, and HCPs in end-of-life situations.8,9 We hypothesize that the early identification of risk factors for the development of terminal delirium in this population may lead to increased use of nonpharmacologic measures to prevent terminal delirium, increase nursing vigilance for development of symptoms, and reduce symptom burden should terminal delirium develop.
Reported prevalence of delirium in the long-term care setting ranges from 1.4 to 70.3%.10 Rates are much higher in institutionalized populations than among patients living at home. In a study of the prevalence, severity, and natural history of neuropsychiatric syndromes in terminally ill veterans enrolled in community hospice, delirium was present in only 4.1% at the initial visit but in 42.5% at the last visit, and more than half of the veterans had at least 1 episode of delirium during the 90-day study period.11 In a study of the prevalence of delirium in patients with terminal cancer admitted to hospice, 80% experienced delirium in their final days.12
Risk factors for the development of delirium that have been identified in actively dying patients include bowel or bladder obstruction, fluid and electrolyte imbalances, suboptimal pain management, medication adverse effects and toxicity (eg, benzodiazepines, opioids, anticholinergics, and steroids), the addition of ≥ 3 medications, infection, hepatic and renal failure, poor glycemic control, hypoxia, and hematologic disturbances.4,5,13 A high percentage of patients with a previous diagnosis of dementia were found to exhibit terminal delirium.14
There are 2 major subtypes of delirium: hyperactive and hypoactive.4 Patients with hypoactive delirium exhibit lethargy, reduced motor activity, lack of interest, and/or incoherent speech. There is currently little evidence to guide the treatment of hypoactive delirium. By contrast, hyperactive delirium is associated with hallucinations, agitation, heightened arousal, and inappropriate behavior. Many studies suggest both nonpharmacologic and pharmacologic treatment modalities for hyperactive delirium.4,13 Nonpharmacologic interventions may minimize the risk and severity of symptoms associated with delirium, and current guidelines recommend these interventions before pharmacologic treatment.4 Nonpharmacologic interventions include but are not limited to the following: engaging the patient in mentally stimulating activities; surrounding the patient with familiar materials (eg, photos); ensuring that all individuals identify themselves when they encounter the patient; minimizing the intensity of stimulation; providing family or volunteer presence, soft lighting, and warm blankets; and ensuring the patient uses hearing aids and glasses if needed.4,14
Although there are no US Food and Drug Administration-approved medications to treat hyperactive delirium, first-generation antipsychotics (eg, haloperidol, chlorpromazine) are considered first-line treatment for patients exhibiting psychosis and psychomotor agitation.3,4,14-16 In terminally ill patients, there is limited evidence from clinical trials to support the efficacy of drug therapy.14 One study showed a lack of efficacy with hydration and opioid rotation.17 Terminally ill patients experiencing hyperactive delirium are at significantly increased risk of muscle tension and myoclonic seizures, and the syndrome is distressing to the patient, family, and caregivers.1 Benzodiazepines can be considered first-line treatment for dying patients with terminal delirium when the goals of treatment are to relieve muscle tension, ensure amnesia, reduce the risk of seizures, and decrease psychosis and agitation.18,19 In patients with a history of alcohol misuse who are experiencing terminal delirium, benzodiazepines also may be the preferred pharmacologic treatment.20 Caution must be exercised with benzodiazepines because they can also cause oversedation, increased confusion, and/or a paradoxical worsening of delirium.3,4,14
Methods
This was a retrospective case-control study of patients who died in the Edward Hines Jr. Veterans Affairs Hospital CLC in Hines, Illinois, under the treating specialty of nursing home hospice from October 1, 2013 to September 30, 2015. Because of the retrospective nature of this study, the use of antipsychotics within the last 2 weeks of life served as a surrogate marker for the development of terminal delirium. Cases were defined as patients who were treated with antipsychotics for terminal delirium within the last 2 weeks of their lives; controls were defined as patients who were not. Living hospice patients and patients who were discharged from the CLC before death were excluded.
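The surrogate-marker rule above amounts to a simple classification over medication administration records. The sketch below illustrates it in Python; the record fields (`drug`, `indication`, `date`) are hypothetical and not the study's actual chart-abstraction schema.

```python
from datetime import date, timedelta

# Antipsychotics named in the article; used here only for illustration.
ANTIPSYCHOTICS = {"haloperidol", "chlorpromazine", "risperidone",
                  "olanzapine", "quetiapine", "aripiprazole"}

def classify(death_date, med_admins):
    """Return 'case' if any antipsychotic was administered for terminal
    delirium within the last 14 days of life, else 'control'."""
    window_start = death_date - timedelta(days=14)
    for med in med_admins:
        if (med["drug"] in ANTIPSYCHOTICS
                and med["indication"] == "terminal delirium"
                and window_start <= med["date"] <= death_date):
            return "case"
    return "control"

meds = [{"drug": "haloperidol", "indication": "terminal delirium",
         "date": date(2015, 3, 10)}]
print(classify(date(2015, 3, 12), meds))  # -> case
```

Note that the indication check matters: as reported in the Results, 4 control-group veterans received haloperidol for nausea/vomiting and were still classified as controls.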
The goals of this study were to (1) determine risk factors in the VA CLC hospice veteran population for the development of terminal delirium; (2) evaluate documentation by the nursing staff of nonpharmacologic interventions and indications for antipsychotic use in the treatment of terminal delirium; and (3) examine the current usage patterns of antipsychotics for the treatment of terminal delirium.
Veterans’ medical records were reviewed from 2 weeks before death until the recorded death date. Factors that were assessed included age, war era of service, date of death, terminal diagnosis, time interval from cancer diagnosis to death, comorbid conditions, prescribed antipsychotic medications, and other medications potentially contributing to delirium. Nursing documentation was reviewed for indications for administration of antipsychotic medications and nonpharmacologic interventions used to mitigate the symptoms of terminal delirium.
Statistical analysis was conducted in SAS Version 9.3. Cases were compared with controls using univariate and multivariate statistics as appropriate. Comparisons for continuous variables (eg, age) were conducted with Student t tests. Categorical variables (eg, PTSD diagnosis) were compared using χ2 analysis or Fisher exact test as appropriate. Variables with a P value < .1 in the univariate analysis were included in logistic regression models. Independent variables were removed from the models, using a backward selection process. Interaction terms were tested based on significance and clinical relevance. A P value < .05 was considered statistically significant.
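The univariate screen (variables with P < .1 enter the multivariate logistic model) can be sketched in a few lines. This is a standard-library illustration using a Pearson chi-square on 2x2 tables with hypothetical counts; the actual analysis used SAS and included t tests and Fisher exact tests where appropriate, which this sketch does not replicate.

```python
import math

def chi2_p_2x2(a, b, c, d):
    """Pearson chi-square p-value for a 2x2 table, stdlib only.
    With 1 df, the chi-square survival function is erfc(sqrt(stat / 2))."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return math.erfc(math.sqrt(stat / 2))

def screen(tables, alpha=0.1):
    """Apply the P < .1 screen; survivors are candidates for the logistic model."""
    return [name for name, (a, b, c, d) in tables.items()
            if chi2_p_2x2(a, b, c, d) < alpha]

# Hypothetical counts (cases exposed, cases unexposed, controls exposed,
# controls unexposed) -- not the study's source data.
tables = {"steroid_use": (60, 126, 14, 76),
          "male_sex": (170, 16, 82, 8)}
print(screen(tables))  # -> ['steroid_use']
```

Backward selection would then repeatedly drop the least significant remaining variable from the logistic model until all retained terms meet the stay criterion.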
Results
From October 1, 2013 to September 30, 2015, 307 patients were analyzed for inclusion in this study. Within this population, 186 received antipsychotic medications for the treatment of terminal delirium (cases), while 90 did not receive antipsychotics (controls). Of the 31 excluded patients, 13 were discharged to receive home hospice care, 11 were discharged to community nursing homes, 5 died in acute care units of Edward Hines, Jr. VA Hospital, and 2 died outside of the study period.
The mean age of all included patients was 75.5 years, and the most common terminal diagnosis was cancer, which occurred in 156 patients (56.5%) (Table 1). Baseline characteristics, including war era of service, terminal diagnosis, and comorbid conditions, were similar between cases and controls. The mean time from cancer diagnosis to death was longer in the control group than in the case group (25 vs 16 months, respectively), although this difference was not statistically significant. There was no statistically significant difference in terminal diagnoses between cases and controls. Veterans in the control group spent more days (mean [SD]) in the hospice unit than did veterans who experienced terminal delirium (48.5 [168.4] vs 28.2 [46.9]; P = .01). Patients with suspected infections were more likely to be in the control group (P = .04; odds ratio [OR], 1.70; 95% CI, 1.02-2.82).
The most common antipsychotic administered in the last 14 days of life was haloperidol. In the case group, 175 veterans (94%) received haloperidol at least once in the last 2 weeks of life. Four veterans (4.4%) in the control group received haloperidol for the indication of nausea/vomiting, not terminal delirium. Atypical antipsychotics were used infrequently and included risperidone, olanzapine, quetiapine, and aripiprazole.
A total of 186 veterans received at least 1 dose of an antipsychotic for terminal delirium: 97 (52.2%) required both scheduled and as-needed doses, 75 (40.3%) received only as-needed doses, and 14 (7.5%) required only scheduled doses. When as-needed and scheduled doses were combined, each veteran received a mean of 14.9 doses; among veterans with antipsychotics ordered only as needed, a mean of 5.8 doses was received per patient. Administration of antipsychotic doses was split evenly among the 3 nursing shifts (day, evening, night), with about 30% of doses administered on each shift.
Nurses were expected to document nonpharmacologic interventions that preceded the administration of each antipsychotic dose. Of the 1,028 doses administered to the 186 veterans who received at least 1 dose of an antipsychotic for terminal delirium, most of the doses (99.4%) had inadequate documentation based on current long-term care guidelines for prudent antipsychotic use.9
Several risk factors for terminal delirium were identified in this veteran population. Veterans with a history of drug or alcohol abuse were found to be at a significantly higher risk for terminal delirium (P = .04; OR, 1.87; 95% CI, 1.03-3.37). As noted in previous studies, steroid use (P = .01; OR, 2.57; 95% CI, 1.26-5.22); opioids (P = .007; OR, 5.94; 95% CI, 1.54-22.99), and anticholinergic medications (P = .01; OR, 2.06; 95% CI, 1.21-3.52) also increased the risk of delirium (Table 2).
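The odds ratios and 95% confidence intervals reported above follow the standard log-based (Woolf) computation on a 2x2 table. The sketch below shows that computation with hypothetical counts, which are not the study's source data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf (log-based) 95% CI for a 2x2 table:
    a/b = exposed/unexposed cases, c/d = exposed/unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

# Hypothetical counts for illustration only.
or_, lo, hi = odds_ratio_ci(20, 10, 30, 30)
print(f"OR {or_:.2f} (95% CI, {lo:.2f}-{hi:.2f})")  # -> OR 2.00 (95% CI, 0.80-4.98)
```

A confidence interval that spans 1 (as in this example) indicates the association is not statistically significant at the .05 level, which is why the OR and CI are reported alongside each P value in Tables 2 and 3.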
When risk factors were combined, interaction terms were identified (Table 3). Patients at higher risk of terminal delirium included Vietnam-era veterans with liver disease (P = .04; OR, 1.21; 95% CI, 1.01-1.45) and veterans with a history of drug or alcohol abuse plus comorbid liver disease (P = .03; OR, 1.26; 95% CI, 1.02-1.56). In a stratified analysis of veterans with a terminal diagnosis of cancer, those with a mental health condition (eg, PTSD, bipolar disorder, or schizophrenia) also showed a higher risk of delirium, although the confidence interval crossed 1 (P = .048; OR, 2.73; 95% CI, 0.98-7.58). Within the cancer cohort, veterans with liver disease and a history of drug/alcohol abuse had an increased risk of delirium (P = .01; OR, 1.43; 95% CI, 1.07-1.91).
Discussion
Terminal delirium is experienced by many individuals in their last days to weeks of life. Symptoms can present as hyperactive (eg, agitation, hallucinations, heightened arousal) or hypoactive (lethargy, reduced motor activity, incoherent speech). Hyperactive terminal delirium is particularly problematic because it causes increased distress to the patient, family, and caregivers. Delirium can lead to safety concerns, such as fall risk, due to patients’ decreased insight into functional decline.
Many studies suggest both nonpharmacologic and pharmacologic treatments for nonterminal delirium that may also apply to terminal delirium. Nonpharmacologic methods, such as providing a quiet and familiar environment, relieving urinary retention or constipation, and attending to sensory deficits may help prevent or minimize delirium. Pharmacologic interventions, such as antipsychotics or benzodiazepines, may benefit when other modalities have failed to assuage distressing symptoms of delirium. Because hypoactive delirium is usually accompanied by somnolence and reduced motor activity, medication is most often administered to individuals with hyperactive delirium.
The VA provides long-term care hospice beds in its CLCs for veterans who are nearing the end of life and have inadequate caregiver support for comprehensive end-of-life care in the home (Case Presentation). Because of their military service and other factors common in their life histories, these veterans may have a unique set of characteristics that are predictive of developing terminal delirium. Awareness of this propensity will allow for early identification of symptoms, timely initiation of nonpharmacologic interventions, and potentially a decreased need for antipsychotic medications.
In this study, as noted in previous studies, certain medications (eg, steroids, opioids, and anticholinergics) increased the risk of developing terminal delirium in this veteran population. Steroids and opioids are commonly used in management of neoplasm-related pain and are prescribed throughout the course of terminal illness. The utility of these medications often outweighs potential adverse effects but should be considered when assessing the risk for development of delirium. Anticholinergics (eg, glycopyrrolate or scopolamine) are often prescribed in the last days of life for terminal secretions despite lack of evidence of patient benefit. Nonetheless, anticholinergics are used to reduce family and caregiver distress resulting from bothersome sounds from terminal secretions, referred to as the death rattle.21
It was found that veterans in the control group lived longer on the hospice unit. It is unclear whether the severity of illness was related to the development of terminal delirium or whether the development of terminal delirium contributed to a hastened death. Veterans with a suspected infection were identified by the use of antibiotics on admission to the hospice unit or when antibiotics were prescribed during the last 2 weeks of life. Thus, treatment of the underlying infection may have contributed to the finding of less delirium in the control group.
More than half the veterans in this study received at least 1 dose of an antipsychotic in the last 2 weeks of life for the treatment of terminal delirium. The most commonly administered medication was haloperidol, given either orally or subcutaneously. Atypical antipsychotics were used less often and were sometimes transitioned to subcutaneous haloperidol as the ability to swallow declined if symptoms persisted.
In this veteran population, a history of drug or alcohol abuse (even if not recent) increased the risk of terminal delirium. Veterans with comorbid cancer and a history of mental health disease (eg, PTSD, schizophrenia, bipolar disorder), as well as Vietnam-era veterans with liver disease (primary cancer, metastases, or cirrhosis), also were more likely to develop terminal delirium.
Just as in community hospice settings, nurses are at the forefront of symptom management for veterans residing in VA CLCs under hospice care. Nonpharmacologic interventions are provided by the around-the-clock bedside team to comfort veterans, families, and caregivers throughout the dying process. Nurses’ assessment skills and documentation inform the plan of care for the entire interdisciplinary hospice team. Because the treatment of terminal delirium often involves the administration of antipsychotic medications, scrutiny is applied to documentation surrounding these medications.7 This study suggests that a more rigorous and consistent method of documenting the assessment of, and interventions for, terminal delirium is needed.
Limitations
Limitations to the current study include hyperactive delirium that was misinterpreted and treated as pain; the probable underreporting of hypoactive delirium and associated symptoms; the use of antipsychotics as a surrogate marker for the development of terminal delirium; and lack of nursing documentation of assessment and interventions of terminal delirium. In addition, the total milligrams of antipsychotics administered per patient were not collected. Finally, there was the potential that other risk factors were not identified due to low numbers of veterans with certain diagnoses (eg, dementia).
Conclusions
Based on the findings of this study, several steps have been implemented to enhance the care of veterans under hospice care in this CLC: (1) nurses providing direct patient care have been educated on the assessment of terminal delirium using the modified Richmond Agitation-Sedation Scale (mRASS) and on its treatment;22 (2) a hospice delirium note template has been created that details symptoms of terminal delirium, nonpharmacologic interventions, the use of antipsychotic medications if indicated, and the outcome of interventions; (3) providers (eg, physicians, advanced practice nurses) review each veteran’s medical history for the risk factors noted above; and (4) any risk factor identified by this study leads to a nursing order for delirium precautions, which requires completion of the delirium note template by nurses each shift.
The goal for this enhanced process is to identify veterans at risk for terminal delirium, observe changes that may indicate the onset of delirium, and intervene promptly to decrease symptom burden and improve quality of life and safety. Potentially, there will be less requirement for the use of antipsychotic medications to control the more severe symptoms of terminal delirium. A future study will evaluate the outcome of this enhanced process for the assessment and treatment of terminal delirium in this veteran population.
Acknowledgment
We thank Martin J. Gorbien, MD, associate chief of staff of Geriatrics and Extended Care, for his continued support throughout this project.
Delirium is a condition commonly exhibited by hospitalized patients and by those who are approaching the end of life.1 Patients who experience a disturbance in attention that develops over a relatively short period and represents an acute change may have delirium.2 Furthermore, there is often an additional cognitive disturbance, such as disorientation, memory deficit, language deficits, visuospatial deficit, or perception. Terminal delirium is defined as delirium that occurs in the dying process and implies that reversal is less likely.3 When death is anticipated, diagnostic workups are not recommended, and treatment of the physiologic abnormalities that contribute to delirium is generally ineffective.4
Background
Delirium is often underdiagnosed and undetected by the clinician. Some studies have shown that delirium is not detected in 22 to 50% of cases.5 Factors that contribute to the underdetection of delirium include preexisting dementia, older age, presence of visual or hearing impairment, and hypoactive presentation of delirium. Other possible reasons for nondetection of delirium are its fluctuating nature and lack of formal cognitive assessment as part of a routine screening across care settings.5 Another study found that 41% of health care providers (HCPs) felt that screening for delirium was burdensome.6
To date, there are no veteran-focused studies that investigate prevalence or risk factors for terminal delirium in US Department of Veterans Affairs (VA) long-term care hospice units. Most long-term care hospice units in the VA are in community living centers (CLCs) that follow regulatory guidelines for using antipsychotic medications. The Centers for Medicare and Medicaid Services state that if antipsychotics are prescribed, documentation must clearly show the indication for the antipsychotic medication, the multiple attempts to implement planned care, nonpharmacologic approaches, and ongoing evaluation of the effectiveness of these interventions.7 The symptoms of terminal delirium cause significant distress to patients, family and caregivers, and nursing staff. Literature suggests that delirium poses significant relational challenges for patients, families, and HCPs in end-of-life situations.8,9 We hypothesize that the early identification of risk factors for the development of terminal delirium in this population may lead to increased use of nonpharmacologic measures to prevent terminal delirium, increase nursing vigilance for development of symptoms, and reduce symptom burden should terminal delirium develop.
Prevalence of delirium in the long-term care setting has ranged between 1.4 and 70.3%.10 The rate was found to be much higher in institutionalized populations compared with that of patients classified as at-home. In a study of the prevalence, severity, and natural history of neuropsychiatric syndromes in terminally ill veterans enrolled in community hospice, delirium was found to be present in only 4.1% on the initial visit and 42.5% during last visit. Also, more than half had at least 1 episode of delirium during the 90-day study period.11 In a study of the prevalence of delirium in terminal cancer patients admitted to hospice, 80% experienced delirium in their final days.12
Risk factors for the development of delirium that have been identified in actively dying patients include bowel or bladder obstruction, fluid and electrolyte imbalances, suboptimal pain management, medication adverse effects and toxicity (eg, benzodiazepines, opioids, anticholinergics, and steroids), the addition of ≥ 3 medications, infection, hepatic and renal failure, poor glycemic control, hypoxia, and hematologic disturbances.4,5,13 A high percentage of patients with a previous diagnosis of dementia were found to exhibit terminal delirium.14
There are 2 major subtypes of delirium: hyperactive and hypoactive.4 Patients with hypoactive delirium exhibit lethargy, reduced motor activity, lack of interest, and/or incoherent speech. There is currently little evidence to guide the treatment of hypoactive delirium. By contrast, hyperactive delirium is associated with hallucinations, agitation, heightened arousal, and inappropriate behavior. Many studies suggest both nonpharmacologic and pharmacologic treatment modalities for the treatment of hyperactive delirium.4,13 Nonpharmacologic interventions may minimize the risk and severity of symptoms associated with delirium. Current guidelines recommend these interventions before pharmacologic treatment.4 Nonpharmacologic interventions include but are not limited to the following: engaging the patient in mentally stimulating activities; surrounding the patient with familiar materials (eg, photos); ensuring that all individuals identify themselves when they encounter a patient; minimizing the intensity of stimulation, providing family or volunteer presence, soft lighting and warm blankets; and ensuring the patient uses hearing aids and glasses if needed.4,14
Although there are no US Food and Drug Administration-approved medications to treat hyperactive delirium, first-generation antipsychotics (eg, haloperidol, chlorpromazine) are considered the first-line treatment for patients exhibiting psychosis and psychomotor agitation.3,4,14-16 In terminally ill patients, there is limited evidence from clinical trials to support the efficacy of drug therapy.14 One study showed lack of efficacy with hydration and opioid rotation.17 In terminally ill patients experiencing hyperactive delirium, there is a significantly increased risk of muscle tension, myoclonic seizures, and distress to the patient, family, and caregiver.1 Benzodiazepines can be considered first-line treatment for dying patients with terminal delirium when the goals of treatment are to relieve muscle tension, ensure amnesia, reduce the risk of seizures, and decrease psychosis and agitation.18,19 Furthermore, in patients with a history of alcohol misuse who are experiencing terminal delirium, benzodiazepines also may be the preferred pharmacologic treatment.20 Caution must be exercised with the use of benzodiazepines because they can also cause oversedation, increased confusion, and/or a paradoxical worsening of delirium.3,4,14
Methods
This was a retrospective case-control study of patients who died in the Edward Hines Jr. Veterans Affairs Hospital CLC in Hines, Illinois, under the treating specialty nursing home hospice from October 1, 2013 to September 30, 2015. Due to the retrospective nature of this study, the use of antipsychotics within the last 2 weeks of life was a surrogate marker for development of terminal delirium. Cases were defined as patients who were treated with antipsychotics for terminal delirium within the last 2 weeks of their lives. Controls were defined as patients who were not treated with antipsychotics for terminal delirium within the last 2 weeks of their lives. Living hospice patients and patients who were discharged from the CLC before death were excluded.
The goals of this study were to (1) determine risk factors in the VA CLC hospice veteran population for the development of terminal delirium; (2) evaluate documentation by the nursing staff of nonpharmacologic interventions and indications for antipsychotic use in the treatment of terminal delirium; and (3) examine the current usage patterns of antipsychotics for the treatment of terminal delirium.
Veterans’ medical records were reviewed from 2 weeks before death until the recorded death date. Factors that were assessed included age, war era of service, date of death, terminal diagnosis, time interval from cancer diagnosis to death, comorbid conditions, prescribed antipsychotic medications, and other medications potentially contributing to delirium. Nursing documentation was reviewed for indications for administration of antipsychotic medications and nonpharmacologic interventions used to mitigate the symptoms of terminal delirium.
Statistical analysis was conducted in SAS Version 9.3. Cases were compared with controls using univariate and multivariate statistics as appropriate. Comparisons for continuous variables (eg, age) were conducted with Student t tests. Categorical variables (eg, PTSD diagnosis) were compared using χ2 analysis or Fisher exact test as appropriate. Variables with a P value < .1 in the univariate analysis were included in logistic regression models. Independent variables were removed from the models, using a backward selection process. Interaction terms were tested based on significance and clinical relevance. A P value < .05 was considered statistically significant.
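For readers less familiar with the univariate screening step described above, the sketch below illustrates the chi-square test on a 2 × 2 exposure table with the df = 1 critical value of 2.706 that corresponds to the P &lt; .1 inclusion threshold. This is a minimal illustration in Python, not the study's actual SAS program, and the exposure counts are hypothetical.

```python
import math  # (not strictly needed here; kept for extensions such as Fisher exact)

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table:
               exposed  unexposed
    cases        a         b
    controls     c         d
    """
    n = a + b + c + d
    # Shortcut formula, algebraically equal to sum((obs - exp)^2 / exp)
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: exposure to a candidate risk factor among
# the 186 cases and 90 controls (group totals taken from the study).
stat = chi_square_2x2(50, 136, 10, 80)

# df = 1 critical value for P < .1, the screening threshold used
# before entering a variable into the logistic regression model.
CRITICAL_P_10 = 2.706
include_in_model = stat > CRITICAL_P_10
print(round(stat, 2), include_in_model)  # → 8.87 True
```

A variable passing this screen would then be a candidate for the backward-selection logistic regression; with small cell counts, the Fisher exact test would replace the chi-square test, as the methods note.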
Results
From October 1, 2013 to September 30, 2015, 307 patients were analyzed for inclusion in this study. Within this population, 186 received antipsychotic medications for the treatment of terminal delirium (cases), while 90 did not receive antipsychotics (controls). Of the 31 excluded patients, 13 were discharged to receive home hospice care, 11 were discharged to community nursing homes, 5 died in acute care units of Edward Hines, Jr. VA Hospital, and 2 died outside of the study period.
The mean age of all included patients was 75.5 years, and the most common terminal diagnosis was cancer, which occurred in 156 patients (56.5%) (Table 1). The baseline characteristics were similar between the cases and controls, including war era of veteran, terminal diagnosis, and comorbid conditions. The mean time between cancer diagnosis and death was numerically longer in the control group than in the case group (25 vs 16 mo, respectively). There was no statistically significant difference in terminal diagnoses between cases and controls. Veterans in the control group spent more days (mean [SD]) in the hospice unit compared with veterans who experienced terminal delirium (48.5 [168.4] vs 28.2 [46.9]; P = .01). Patients with suspected infections were more likely to be found in the control group (P = .04; odds ratio [OR] = 1.70; 95% CI, 1.02-2.82).
The most common antipsychotic administered in the last 14 days of life was haloperidol. In the case group, 175 (94%) received haloperidol at least once in the last 2 weeks of life. Four (4.4%) veterans in the control group received haloperidol for the indication of nausea/vomiting, not terminal delirium. Atypical antipsychotics were infrequently used and included risperidone, olanzapine, quetiapine, and aripiprazole.
A total of 186 veterans received at least 1 dose of an antipsychotic for terminal delirium: 97 (52.2%) required both scheduled and as-needed doses; 75 (40.3%) received only as-needed doses; and 14 (7.5%) required only scheduled doses. When the as-needed and scheduled doses were combined, each veteran received a mean of 14.9 doses. For those veterans with antipsychotics ordered only as needed, a mean of 5.8 doses was received per patient. Administration of antipsychotic doses was split evenly among the 3 nursing shifts (day-evening-night), with about 30% of doses administered on each shift.
Nurses were expected to document nonpharmacologic interventions that preceded the administration of each antipsychotic dose. Of the 1,028 doses administered to the 186 veterans who received at least 1 dose of an antipsychotic for terminal delirium, most of the doses (99.4%) had inadequate documentation based on current long-term care guidelines for prudent antipsychotic use.9
Several risk factors for terminal delirium were identified in this veteran population. Veterans with a history of drug or alcohol abuse were found to be at a significantly higher risk for terminal delirium (P = .04; OR, 1.87; 95% CI, 1.03-3.37). As noted in previous studies, steroid use (P = .01; OR, 2.57; 95% CI, 1.26-5.22); opioids (P = .007; OR, 5.94; 95% CI, 1.54-22.99), and anticholinergic medications (P = .01; OR, 2.06; 95% CI, 1.21-3.52) also increased the risk of delirium (Table 2).
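For readers who want to see how a reported odds ratio and its confidence interval relate to the underlying counts, the sketch below computes an OR with a Wald 95% CI from a 2 × 2 table. The counts are hypothetical illustrations, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
               exposed  unexposed
    cases        a         b
    controls     c         d
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR): sqrt of summed reciprocal cell counts
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: a risk factor present in 40 of 186 cases
# and 11 of 90 controls.
or_, lo, hi = odds_ratio_ci(40, 146, 11, 79)
print(f"OR {or_:.2f} (95% CI, {lo:.2f}-{hi:.2f})")  # → OR 1.97 (95% CI, 0.96-4.05)
```

As in the stratified results above, a CI that crosses 1 signals a finding that does not reach conventional statistical significance even when the point estimate looks large.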
When risk factors were combined, interaction terms were identified (Table 3). Those patients found to be at a higher risk of terminal delirium included Vietnam-era veterans with liver disease (P = .04; OR, 1.21; 95% CI, 1.01-1.45) and veterans with a history of drug or alcohol abuse plus comorbid liver disease (P = .03; OR, 1.26; 95% CI, 1.02-1.56). In a stratified analysis of veterans with a terminal diagnosis of cancer, those with a mental health condition (eg, PTSD, bipolar disorder, or schizophrenia) showed a higher risk of delirium (P = .048; OR, 2.73; 95% CI, 0.98-7.58), although the confidence interval crossed 1. Within the cancer cohort, veterans with liver disease and a history of drug/alcohol abuse had increased risk of delirium (P = .01; OR, 1.43; 95% CI, 1.07-1.91).
Discussion
Terminal delirium is experienced by many individuals in their last days to weeks of life. Symptoms can present as hyperactive (eg, agitation, hallucinations, heightened arousal) or hypoactive (lethargy, reduced motor activity, incoherent speech). Hyperactive terminal delirium is particularly problematic because it causes increased distress to the patient, family, and caregivers. Delirium can lead to safety concerns, such as fall risk, due to patients’ decreased insight into functional decline.
Many studies suggest both nonpharmacologic and pharmacologic treatments for nonterminal delirium that may also apply to terminal delirium. Nonpharmacologic methods, such as providing a quiet and familiar environment, relieving urinary retention or constipation, and attending to sensory deficits, may help prevent or minimize delirium. Pharmacologic interventions, such as antipsychotics or benzodiazepines, may be of benefit when other modalities have failed to assuage distressing symptoms of delirium. Because hypoactive delirium is usually accompanied by somnolence and reduced motor activity, medication is most often administered to individuals with hyperactive delirium.
The VA provides long-term care hospice beds in their CLCs for veterans who are nearing end of life and have inadequate caregiver support for comprehensive end-of-life care in the home (Case Presentation). Because of their military service and other factors common in their life histories, they may have a unique set of characteristics that are predictive of developing terminal delirium. Awareness of the propensity for terminal delirium will allow for early identification of symptoms, timely initiation of nonpharmacologic interventions, and potentially a decreased need for use of antipsychotic medications.
In this study, as noted in previous studies, certain medications (eg, steroids, opioids, and anticholinergics) increased the risk of developing terminal delirium in this veteran population. Steroids and opioids are commonly used in management of neoplasm-related pain and are prescribed throughout the course of terminal illness. The utility of these medications often outweighs potential adverse effects but should be considered when assessing the risk for development of delirium. Anticholinergics (eg, glycopyrrolate or scopolamine) are often prescribed in the last days of life for terminal secretions despite lack of evidence of patient benefit. Nonetheless, anticholinergics are used to reduce family and caregiver distress resulting from bothersome sounds from terminal secretions, referred to as the death rattle.21
It was found that veterans in the control group lived longer on the hospice unit. It is unclear whether the severity of illness was related to the development of terminal delirium or whether the development of terminal delirium contributed to a hastened death. Veterans with a suspected infection were identified by the use of antibiotics on admission to the hospice unit or when antibiotics were prescribed during the last 2 weeks of life. Thus, treatment of the underlying infection may have contributed to the finding of less delirium in the control group.
More than half the veterans in this study received at least 1 dose of an antipsychotic in the last 2 weeks of life for the treatment of terminal delirium. The most commonly administered medication was haloperidol, given either orally or subcutaneously. Atypical antipsychotics were used less often; if symptoms persisted as the ability to swallow declined, patients were sometimes transitioned to subcutaneous haloperidol.
In this veteran population, having a history of drug or alcohol abuse (even if not recent) increased the risk of terminal delirium. Veterans with comorbid cancer and a history of mental health disease (eg, PTSD, schizophrenia, bipolar disorder), as well as Vietnam-era veterans with liver disease (primary cancer, metastases, or cirrhosis), also were more likely to develop terminal delirium.
Just as hospice care is being provided in community settings, nurses are at the forefront of symptom management for veterans residing in VA CLCs under hospice care. Nonpharmacologic interventions are provided by the around-the-clock bedside team to provide comfort for veterans, families, and caregivers throughout the dying process. Nurses’ assessment skills and documentation inform the plan of care for the entire interdisciplinary hospice team. Because the treatment of terminal delirium often involves the administration of antipsychotic medications, scrutiny is applied to documentation surrounding these medications.7 This study suggested that there is a need for a more rigorous and consistent method of documenting the assessment of, and interventions for, terminal delirium.
Limitations
Limitations to the current study include hyperactive delirium that was misinterpreted and treated as pain; the probable underreporting of hypoactive delirium and associated symptoms; the use of antipsychotics as a surrogate marker for the development of terminal delirium; and lack of nursing documentation of assessment and interventions of terminal delirium. In addition, the total milligrams of antipsychotics administered per patient were not collected. Finally, there was the potential that other risk factors were not identified due to low numbers of veterans with certain diagnoses (eg, dementia).
Conclusions
Based on the findings in this study, several steps have been implemented to enhance the care of veterans under hospice care in this CLC: (1) Nurses providing direct patient care have been educated on the assessment by use of the mRASS and treatment of terminal delirium;22 (2) A hospice delirium note template has been created that details symptoms of terminal delirium, nonpharmacologic interventions, the use of antipsychotic medications if indicated, and the outcome of interventions; (3) Providers (eg, physician, advanced practice nurses) review each veteran’s medical history for the risk factors noted above; (4) Any risk factor(s) identified by this study will lead to a nursing order for delirium precautions, which requires completion of the delirium note template by nurses each shift.
The goal for this enhanced process is to identify veterans at risk for terminal delirium, observe changes that may indicate the onset of delirium, and intervene promptly to decrease symptom burden and improve quality of life and safety. Potentially, there will be less requirement for the use of antipsychotic medications to control the more severe symptoms of terminal delirium. A future study will evaluate the outcome of this enhanced process for the assessment and treatment of terminal delirium in this veteran population.
Acknowledgment
We thank Martin J. Gorbien, MD, associate chief of staff of Geriatrics and Extended Care, for his continued support throughout this project.
1. Casarett DJ, Inouye SK. Diagnosis and management of delirium near the end of life. Ann Intern Med. 2001;135(1):32-40.
2. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 5th ed. Washington, DC; 2013.
3. Grassi L, Caraceni A, Mitchell A, et al. Management of delirium in palliative care: a review. Curr Psychiatry Rep. 2015;17(13):1-9. doi:10.1007/s11920-015-0550-8
4. Bush S, Leonard M, Agar M, et al. End-of-life delirium: issues regarding the recognition, optimal management, and role of sedation in the dying phase. J Pain Symptom Manage. 2014;48(2):215-230. doi:10.1016/j.jpainsymman.2014.05.009
5. Moyer D. Terminal delirium in geriatric patients with cancer at end of life. Am J Hosp Palliat Med. 2010;28(1):44-51. doi:10.1177/1049909110376755
6. Lai X, Huang Z, Chen C, et al. Delirium screening in patients in a palliative care ward: a best practice implementation project. JBI Database System Rev Implement Rep. 2019;17(3):429-441. doi:10.11124/JBISRIR-2017-003646
7. Centers for Medicare and Medicaid Services. Medicare and Medicaid Programs; reform of requirements for long-term care facilities. Final rule. Fed Regist. 2016;81(192):68688-68872. Accessed April 17, 2021. https://pubmed.ncbi.nlm.nih.gov/27731960
8. Wright D, Brajtman S, Macdonald M. A relational ethical approach to end-of-life delirium. J Pain Symptom Manage. 2014;48(2):191-198. doi:10.1016/j.jpainsymman.2013.08.015
9. Brajtman S, Higuchi K, McPherson C. Caring for patients with terminal delirium: palliative care unit and home care nurses’ experience. Int J Palliat Nurs. 2006;12(4):150-156. doi:10.12968/ijpn.2006.12.4.21010
10. Lange E, Verhaak P, Meer K. Prevalence, presentation, and prognosis of delirium in older people in the population, at home and in long-term care: a review. Int J Geriatr Psychiatry. 2013;28(2):127-134. doi:10.1002/gps.3814
11. Goy E, Ganzini L. Prevalence and natural history of neuropsychiatric syndromes in veteran hospice patients. J Pain Symptom Manage. 2011;41(12):394-401. doi:10.1016/j.jpainsymman.2010.04.015
12. Bush S, Bruera E. The assessment and management of delirium in cancer patients. Oncologist. 2009;14(10):1039-1049. doi:10.1634/theoncologist.2009-0122
13. Clary P, Lawson P. Pharmacologic pearls for end-of-life care. Am Fam Physician. 2009;79(12):1059-1065.
14. Blinderman CD, Billings J. Comfort for patients dying in the hospital. N Engl J Med. 2015;373(26):2549-2561. doi:10.1056/NEJMra1411746
15. Irwin SA, Pirrello RD, Hirst JM, Buckholz GT, Ferris FD. Clarifying delirium management: practical, evidence-based, expert recommendations for clinical practice. J Palliat Med. 2013;16(4):423-435. doi:10.1089/jpm.2012.0319
16. Bobb B. Dyspnea and delirium at the end of life. Clin J Oncol Nurs. 2016;20(3):244-246. doi:10.1188/16.CJON.244-246
17. Morita T, Tei Y, Inoue S. Agitated terminal delirium and association with partial opioid substitution and hydration. J Palliat Med. 2003;6(4):557-563. doi:10.1089/109662103768253669
18. Attard A, Ranjith G, Taylor D. Delirium and its treatment. CNS Drugs. 2008;22(8):631-644. doi:10.2165/00023210-200822080-00002
19. Hui D. Benzodiazepines for agitation in patients with delirium: selecting the right patient, right time, and right indication. Curr Opin Support Palliat Care. 2018;12(4):489-494. doi:10.1097/SPC.0000000000000395
20. Irwin P, Murray S, Bilinski A, Chern B, Stafford B. Alcohol withdrawal as an underrated cause of agitated delirium and terminal restlessness in patients with advanced malignancy. J Pain Symptom Manage. 2005;29(1):104-108. doi:10.1016/j.jpainsymman.2004.04.010
21. Lokker ME, van Zuylen L, van der Rijt CCD, van der Heide A. Prevalence, impact, and treatment of death rattle: a systematic review. J Pain Symptom Manage. 2014;48:2-12. doi:10.1016/j.jpainsymman.2013.03.011
22. Sessler C, Gosnell M, Grap M, et al. The Richmond Agitation–Sedation Scale: validity and reliability in adult intensive care unit patients. Am J Respir Crit Care Med. 2002;166(10):1338-1344. doi:10.1164/rccm.2107138
Impact of the COVID-19 Pandemic on Multiple Sclerosis Care for Veterans
The following is a lightly edited transcript of a teleconference recorded in February 2021.
How has COVID impacted Veterans with multiple sclerosis?
Mitchell Wallin, MD, MPH: There has been a lot of concern in the multiple sclerosis (MS) patient community about getting infected with COVID-19 and what to do about it. Now that there are vaccines, the concern is whether and how to take a vaccine. At least here, in the Washington DC/Baltimore area where I practice, we have seen many veterans being hospitalized with COVID-19, some with MS, and some who have died of COVID-19. So, there has been a lot of fear, especially in veterans who are older with comorbid diseases.
Rebecca Spain, MD, MSPH: There also has been an impact on our ability to provide care to our veterans with MS. There are challenges having them come into the office or providing virtual care. There are additional challenges and concerns this year about making changes in MS medications because we can’t see patients in person to understand their needs or the current status of their MS. So, providing care has been a challenge this year as well.
There has also been an impact on our day-to-day lives, as there has been for all of us, from the lockdown, particularly not being able to exercise and socialize as much. There have been physical, social, and emotional tolls that the pandemic has taken on veterans with MS.
Jodie Haselkorn, MD, MPH: Survivors of COVID-19 who are transferred to an inpatient multidisciplinary rehabilitation unit to address cardiopulmonary impairments, immobility, psychological impacts, and other medical complications are highly motivated to work with the team to achieve a safe discharge. The US Department of Veterans Affairs (VA) Rehabilitation Services has much to offer them.
Heidi Maloni, PhD, NP: Veterans with MS are not at greater risk simply because they are diagnosed with MS. But their comorbidities, such as hypertension or obesity, and factors such as older age and increased disability can increase the risk of COVID-19 infection and of poorer outcomes if infected.
Veterans have asked “Am I at greater risk? Do I need to do something more to protect myself?” I have had innumerable veterans call and ask whether I can write them letters for their employers to ensure that they work at home longer rather than go into the workplace, because they’re very nervous and don’t feel confident that masking and distancing are really going to be protective.
Mitchell Wallin: We are analyzing some of our data in the VA health care system related to COVID-19 infections in the MS population. We can’t say for sure what our numbers are, but our rates of infection and hospitalization are higher than in the general population, and we will soon have a report. We have a majority male population, which is different from the general MS population, which is predominantly female. The proportion of minority patients in the VA mirrors that of the US population. These demographic factors, along with a high level of comorbid disease, put veterans at high risk for acquiring COVID-19. So, in some ways it’s hard to compare when you look at reports from other countries or the US National MS-COVID-19 Registry, which captures a population that is predominantly female. In the VA, our age range spans from the 20s to almost 100 years. We must understand our population to prevent COVID-19 and better care for the most vulnerable.
Rebecca Spain: Heidi, my understanding, although the numbers are small, is that for the most part, veterans with MS who are older are at higher risk of complications and death, which is also true of the general population. But there is an additional risk for people with MS who have higher disability levels. My understanding from reading the literature was that people with MS needing a cane to walk or greater assistance for mobility were at a higher risk for COVID-19 complications, including mortality. I have been particularly encouraged that in many places this special population of people with MS is getting vaccinated sooner.
Heidi Maloni: I completely agree; you said it very clearly, Becca. Their disability level puts them at risk.
Rebecca Spain: Disability is a comorbidity.
Heidi Maloni: Yes. Just sitting in a wheelchair and not being able to get a full breath or having problems with respiratory effort really does put you at risk of doing poorly if you were to have COVID-19.
Are there other ancillary impacts from COVID-19 for patients with MS?
Jodie Haselkorn: Individuals who are hospitalized with COVID-19 miss social touch and social support from family and friends. They miss familiar conversations, a hug, and having someone hold their hand. The acute phase of the infection limits professional face-to-face interaction with patients due to time and protective garments. There are reports of negative consequences of this isolation, and social reintegration of COVID-19 survivors is a necessary part of rehabilitation.
Mitchell Wallin: For certain procedures (eg, magnetic resonance imaging [MRI]) or consultations, we need to bring people into the medical center. Many clinical encounters, however, can be done through telemedicine and both the VA and the US Department of Defense systems were set up to execute this type of visit. We had been doing telemedicine for a long time before the pandemic and we were in a better position than a lot of other health systems to shift to a virtual format with COVID-19. We had to ramp up a little bit and get our tools working a little more effectively for all clinics, but I think we were prepared to broadly execute telemedicine clinics for the pandemic.
Jodie Haselkorn: I agree that the VA infrastructure was ahead of most other health systems in terms of readiness for telehealth and maintaining access to care. Not all health care providers (HCPs) were using it, but the system was there and included a telehealth coordinator in each facility who could gear health care professionals up quickly. Additionally, a system was in place to provide veterans and caregivers with telehealth home equipment and training. Another thing that really helped was the MISSION Act. Veterans who have difficulty traveling for an appointment may be able to seek care outside of the VA within their own community; they may be able to go into a local facility to get laboratory or radiologic studies done or continue rehabilitation closer to home.
VA MS Registry Data
Rebecca Spain: Mitch, there are many interesting things we can learn about the interplay between COVID-19 and MS using registries such as how it affects people based on rural vs metropolitan living, whether people are living in single family homes or not as a proxy marker for social support, and so on.
Mitchell Wallin: We have both an MS registry to track and follow patients through our clinical network and a specific COVID-19 registry in the VA. We have identified the MS cases infected with COVID-19 and are putting the 2 together.
Jodie Haselkorn: There are a number of efforts in mental health that are moving forward to examine depression and anxiety during COVID-19. Individuals with MS have increased rates of depression and anxiety above those of the general population during usual times. The literature reports an increase in anxiety and depression in the general population associated with the pandemic, and veterans with MS seem to be reporting these symptoms more frequently as well. We will be able to use the registry to assess the impacts of COVID-19 on depression and anxiety in veterans with MS.
Providing MS Care During COVID-19
Jodie Haselkorn: The transition to telehealth during COVID-19 has been surprisingly seamless with some additional training for veterans and HCPs. I initially experienced an inefficiency in my clinic visit productivity. It took me longer to see a veteran because I wasn’t doing telehealth in our clinic with support staff and residents: my examination had to change, my documentation template needed to be restructured, and the coding was different. Sometimes I saw a veteran in clinic, and my next appointment required me to move back to my office in another building for a telehealth appointment. Teaching virtual trainees who also participated in the clinic encounters had its own challenges and rewards. My ‘motor routine’ was disrupted.
Rebecca Spain: There’s a real learning curve for telehealth in terms of how comfortable you feel with the data you get by telephone or video and how reliable that is. There are issues based on technology factors—like the patient’s bandwidth—because determining how smooth their motions are is challenging if you have a jerky, intermittent signal. I learned quickly to always do the physical examination first because I might lose video connection partway through and have to switch to a phone visit!
It’s still an open question how much we are missing by using video rather than in-person visits, and what the long-term health outcomes and implications of that are. That is something that needs to be studied in neurology, where we pride ourselves on the physical examination. When we move to a virtual physical examination, is there a cost? There are incredible gains using telehealth in terms of convenience and access to care, which may outweigh some of the drawbacks in particular cases.
There are also pandemic challenge in terms of clinic workflow. At VA Portland Health Care System in Oregon, I have 3 clinics for Friday morning: telephone, virtual, and face-to-face clinics. It’s a real struggle for the schedulers. And because of that transition to new system workflows to accommodate this, some patient visits have been dropped, lost, or scheduled incorrectly.
Heidi Maloni: As the nurse in this group, I agree with everything that Becca and Jodie have said about telehealth. But, I have found some benefits, and one of them is a greater intimacy with my patients. What do I mean by that? For instance, if a patient has taken me to their kitchen and opened their cupboard to show me the breakfast cereal, I’m also observing that there’s nothing else in that cupboard other than cereal. I’m also putting some things together about health and wellness. Or, for the first time, I might meet their significant other who can’t come to clinic because they’re working, but they are at home with the patient. And then having that 3-way conversation with the patient and the significant other, that’s kind of opened up my sense of who that person is.
You are right about the neurological examination. It’s challenging to make exacting assessments. When gathering household objects, ice bags and pronged forks to assess sensation, you remember that this exam is subjective and there is meaning in this remote evaluation. But all in all, I have been blessed with telehealth. Patients don’t mind it at all. They’re completely open to the idea. They like the telehealth for the contact they are able to have with their HCP.
Jodie Haselkorn: As you were saying that, Heidi, I thought, I’ve been inside my veterans’ bathrooms virtually and have seen all of their equipment that they have at home. In a face-to-face clinic visit, you don’t have an opportunity to see all their canes and walkers, braces, and other assistive technology. Some of it’s stashed in a closet, some of it under the bed. In a virtual visit, I get to understand why some is not used, what veterans prefer, and see their own innovations for mobility and self-care.
Mitchell Wallin: There’s a typical ritual that patients talk about when they go to a clinic. They check in, sit down, and wait for the nurse to give them their vital signs and set them up in the room. And then they meet with their HCP, and finally they complete the tasks on the checklist. And part of that may mean scheduling an MRI or going to the lab. But some of these handoffs don’t happen as well on telehealth. Maybe we haven’t integrated these segments of a clinical visit into telehealth platforms. But it could be developed, and there could be new neurologic tools to improve the interview and physical examination. Twenty years ago, you couldn’t deposit a check on your phone; but now you can do everything on your phone you could do in a physical bank. With some creativity, we can improve parts of the neurological exam that are currently difficult to assess remotely.
Jodie Haselkorn: I have not used peripherals in video telehealth to home and I would need to become accustomed to their use with current technology and train patients and caregivers. I would like telehealth peripherals such as a stethoscope to listen to the abdomen of a veteran with neurogenic bowel or a user-friendly ultrasound probe to measure postvoid residual urine in an individual with symptoms of neurogenic bladder, in addition to devices that measure walking speed and pulmonary function. I look forward to the development, use, and the incorporation peripherals that will enable a more extensive virtual exam within the home.
What are the MS Centers of Excellence working on now?
Jodie Haselkorn: We are working to understand the healthcare needs of veterans with MS by evaluating not only care for MS within the VA, but also the types and quantity of MS specialty care VA that is being received in the community during the pandemic. Dr. Wallin is also using the registry to lead a telehealth study to capture the variety of different codes that VA health professionals in MS have used to document workload by telehealth, and face-to-face, and telephone encounters.
Rebecca Spain: The MS Center of Excellence (MSCoE) is coming out with note templates to be available for HCPs, which we can refine as we get experience. This is s one way we can promote high standards in MS care by making these ancillary tools more productive.
Jodie Haselkorn: We are looking at different ways to achieve a high-quality virtual examination using standardized examination strategies and patient and caregiver information to prepare for a specialty MS visit.
Rebecca Spain: I would like to, in more of a research setting, study health outcomes using telehealth vs in person and start tracking that long term.
Mitchell Wallin: We can probably do more in terms of standardization, such as the routine patient reported surveys and implementing the new Consortium of Multiple Sclerosis Centers’ International MRI criteria. The COVID pandemic has affected everything in medical care. But we want to have a regular standardized outcome to assess, and if we can start to do some of the standard data collection through telemedicine, it becomes part of our regular clinic data.
Heidi Maloni: We need better technology. You can do electrocardiograms on your watch. Could we do Dinamaps? Could we figure out strength? That’s a wish list.
Jodie Haselkorn: Since the MSCoE is a national program, we were set up to do what we needed to do for education. We were able to continue on with all of our HCP webinars, including the series with the National MS Society (NMSS). We also have a Specialty Care Access Network-Extension for Community Healthcare Outcomes (SCAN-ECHO) series with the Northwest ECHO VA program and collaborated with the Can Do MS program on patient education as well. We’ve sent out 2 printed newsletters for veterans. The training of HCPs for the future has continued as well. All of our postdoctoral fellows who have finished their programs on time and moved on to either clinical practice or received career development grants to continue their VA careers, a new fellow has joined, and our other fellows are continuing as planned.
The loss that we sustained was in-person meetings. We held MSCoE Regional Program meetings in the East and West that combined education and administrative goals. Both of these were well attended and successful. There was a lot of virtual education available from multiple sources. It was challenging this year was to anticipate what education programming people wanted from MSCoE. Interestingly, a lot of our regional HCPs did not want much more COVID-19 education. They wanted other education and we were able to meet those needs.
Did the pandemic impact the VA MS registry?
Mitchell Wallin: Like any electronic product, the VA MS Surveillance Registry must be maintained, and we have tried to encourage people to use it. Our biggest concern was to identify cases of MS that got infected with COVID-19 and to put those people into the registry. In some cases, Veterans with MS were in locations without a MS clinic. So, we’ve spent a lot more time identifying those cases and adjudicating them to make sure their infection and MS were documented correctly.
During the COVID-19 pandemic, the VA healthcare system has been taxed like others and so HCPs have been a lot busier than normal, forcing new workflows. It has been a hard year that way because a lot of health care providers have been doing many other jobs to help maintain patient care during the COVID-19 pandemic.
Heidi Maloni: The impact of COVID-19 has been positive for the registry because we’ve had more opportunities to populate it.
Jodie Haselkorn: Dr. Wallin and the COVID-19 Registry group began building the combined registry at the onset of the pandemic. We have developed the capacity to identify COVID-19 infections in veterans who have MS and receive care in the VA. We entered these cases in the MS Surveillance Registry and have developed a linkage with the COVID-19 national VA registry. We are in the middle of the grunt work part case entry, but it is a rich resource.
How has the pandemic impacted MS research?
Rebecca Spain: COVID-19 has put a big damper on clinical research progress, including some of our MSCoE studies. It has been difficult to have subjects come in for clinical visits. It’s been difficult to get approval for new studies. It’s shifted timelines dramatically, and then that always increases budgets in a time when there’s not a lot of extra money. So, for clinical research, it’s been a real struggle and a strain and an ever-moving target. For laboratory research most, if not all, centers that have laboratory research at some point were closed and have only slowly reopened. Some still haven’t reopened to any kind of research or laboratory. So, it’s been tough, I think, on research in general.
Heidi Maloni: I would say the word is devastating. The pandemic essentially put a stop to in-person research studies. Our hospital was in research phase I, meaning human subjects can only participate in a research study if they are an inpatient or outpatient with an established clinic visit (clinics open to 25% occupancy) or involved in a study requiring safety monitoring, This plan limits risk of COVID-19 exposure.
Rebecca Spain: There is risk for a higher dropout rate of subjects from studies meaning there’s less chance of success for finding answers if enough people don’t stay in. At a certain point, you have to say, “Is this going to be a successful study?”
Jodie Haselkorn: Dr. Spain has done an amazing job leading a multisite, international clinical trial funded by the VA and the NMSS and kept it afloat, despite challenges. The pandemic has had impacts, but the study continues to move towards completion. I’ve appreciated the efforts of the Research Service at VA Puget Sound to ensure that we could safely obtain many of the 12-month outcomes for all the participants enrolled in that study.
Mitchell Wallin: The funding for some of our nonprofit partners, including the Paralyzed Veterans Association (PVA) and the NMSS, has suffered as well and so a lot of their funding programs have closed or been cut back during the pandemic. Despite that, we still have been able to use televideo technology for our clinical and educational programs with our network.
Jodie Haselkorn: MSCoE also does health services and epidemiological studies in addition to clinical trials and that work has continued. Quite a few of the studies that had human subjects in them were completed in terms of data collection, and so those are being analyzed. There will be a drop in funded studies, publications and posters as the pandemic continues and for a recovery period. We have a robust baseline for research productivity and a talented team. We’ll be able to track drop off and recovery over time.
Rebecca Spain: There’s going to be long-term consequences that we don’t see right now, especially for young researchers who have missed getting pilot data which would have led to additional small grants and then later large grants. There’s going to be an education gap that’s going on with all of the kids who are not able to go to school properly. It’s part of that whole swath of lost time and lost opportunity that we will have to deal with.
However, there are going to be some positive changes. We’re now busy designing clinical trials that can be done virtually to minimize any contact with the health facility, and then looking at things like shifting to research ideas that are more focused around health services.
Jodie Haselkorn: Given the current impacts of the pandemic on delivery of health care there is a strong interest in looking at how we can deliver health care in ways that accommodates the consumers and the providers perspectives. In the future we see marked impacts in our abilities to deliver care to Veterans with MS.
As a final thought, I wanted to put in a plug for this talented team. One of our pandemic resolutions was to innovatively find new possibilities and avoid negative focus on small changes. We are fortunate that all our staff have remained healthy and been supportive and compassionate with each other throughout this period. We have met our goals and are still moving forward.
MSCoE has benefited from the supportive leadership of Sharyl Martini, MD, PhD, and Glenn Graham, MD, PhD, in VA Specialty Care Neurology and leadership and space from VA Puget Sound, VA Portland Health Care System, the Washington DC VA Medical Center and VA Maryland Health Care System in Baltimore.
We also have a national advisory system that is actively involved, sets high standards and performs a rigorous annual review. We have rich inputs from the VA National Regional Programs and Veterans. Additionally, we have had the leadership and opportunities to collaborate with outside organizations including, the Consortium of MS Centers, the NMSS, and the PVA. We have been fortunate.
The following is a lightly edited transcript of a teleconference recorded in February 2021.
How has COVID impacted Veterans with multiple sclerosis?
Mitchell Wallin, MD, MPH: There has been a lot of concern in the multiple sclerosis (MS) patient community about getting infected with COVID-19 and what to do about it. Now that there are vaccines, the concern is whether and how to take a vaccine. At least here in the Washington, DC/Baltimore area where I practice, we have seen many veterans hospitalized with COVID-19, some with MS, and some who have died of COVID-19. So there has been a lot of fear, especially among veterans who are older with comorbid diseases.
Rebecca Spain, MD, MSPH: There also has been an impact on our ability to provide care to our veterans with MS. There are challenges having them come into the office or providing virtual care. There are additional challenges and concerns this year about making changes in MS medications because we can’t see patients in person to assess their needs or the current status of their MS. So providing care has been a challenge this year as well.
There has also been an impact on our day-to-day lives from the lockdown, as there has been for all of us, particularly not being able to exercise and socialize as much. There have been physical, social, and emotional tolls that this disease has taken on veterans with MS.
Jodie Haselkorn, MD, MPH: Survivors of COVID-19 who are transferred to an inpatient multidisciplinary rehabilitation unit to address cardiopulmonary impairments, immobility, psychological impacts, and other medical complications are highly motivated to work with the team to achieve a safe discharge. The US Department of Veterans Affairs (VA) Rehabilitation Services has much to offer them.
Heidi Maloni, PhD, NP: Veterans with MS are not at greater risk simply because they are diagnosed with MS. But their comorbidities, such as hypertension or obesity, and factors such as older age and increased disability, can increase the risk of COVID-19 infection and of poorer outcomes if infected.
Veterans have asked, “Am I at greater risk? Do I need to do something more to protect myself?” I have had innumerable veterans call and ask whether I can write letters to their employers so that they can keep working at home rather than go into the workplace, because they’re very nervous and don’t feel confident that masking and distancing are really going to be protective.
Mitchell Wallin: We are analyzing some of our data in the VA health care system related to COVID-19 infections in the MS population. We can’t say for sure what our numbers are, but our rates of infection and hospitalization are higher than those of the general population, and we will soon have a report. We have a majority male population, which is different from the general MS population, which is predominantly female. The proportion of minority patients in the VA mirrors that of the US population. These demographic factors, along with a high level of comorbid disease, put veterans at high risk for acquiring COVID-19. So, in some ways it’s hard to compare when you look at reports from other countries or the US National MS-COVID-19 Registry, which captures a population that is predominantly female. In the VA, our age range spans from the 20s to almost 100 years. We must understand our population to prevent COVID-19 and better care for the most vulnerable.
Rebecca Spain: Heidi, my understanding, although the numbers are small, is that for the most part, veterans with MS who are older are at higher risk of complications and death, which is also true of the general population. But there is an additional risk for people with MS who have higher disability levels. My understanding from reading the literature is that people with MS requiring a cane to walk or greater assistance with mobility are at higher risk for COVID-19 complications, including mortality. I have been particularly encouraged that in many places this special population of people with MS is getting vaccinated sooner.
Heidi Maloni: I completely agree; you said it very clearly, Becca. Their disability level puts them at risk.
Rebecca Spain: Disability is a comorbidity.
Heidi Maloni: Yes. Just sitting in a wheelchair and not being able to get a full breath or having problems with respiratory effort really does put you at risk of doing poorly if you were to have COVID-19.
Are there other ancillary impacts from COVID-19 for patients with MS?
Jodie Haselkorn: Individuals who are hospitalized with COVID-19 miss social touch and social support from family and friends. They miss familiar conversations, a hug, and having someone hold their hand. The acute phase of the infection limits professional face-to-face interaction with patients due to time and protective garments. There are reports of negative consequences of this isolation, and social reintegration is a necessary part of rehabilitation for COVID-19 survivors.
Mitchell Wallin: For certain procedures (eg, magnetic resonance imaging [MRI]) or consultations, we need to bring people into the medical center. Many clinical encounters, however, can be done through telemedicine and both the VA and the US Department of Defense systems were set up to execute this type of visit. We had been doing telemedicine for a long time before the pandemic and we were in a better position than a lot of other health systems to shift to a virtual format with COVID-19. We had to ramp up a little bit and get our tools working a little more effectively for all clinics, but I think we were prepared to broadly execute telemedicine clinics for the pandemic.
Jodie Haselkorn: I agree that the VA infrastructure was ahead of most other health systems in terms of readiness for telehealth and maintaining access to care. Not all health care providers (HCPs) were using it, but the system was there and included a telehealth coordinator in each facility who could gear health care professionals up quickly. Additionally, a system was in place to provide veterans and caregivers with telehealth home equipment and training. Another thing that really helped was the MISSION Act. Veterans who have difficulty traveling for an appointment may be able to seek care outside of the VA within their own community. They may be able to go to a local facility for laboratory or radiologic studies or continue rehabilitation closer to home.
VA MS Registry Data
Rebecca Spain: Mitch, there are many interesting things we can learn about the interplay between COVID-19 and MS using registries, such as how it affects people based on rural vs metropolitan living, whether people are living in single-family homes or not as a proxy marker for social support, and so on.
Mitchell Wallin: In the VA, we have both an MS registry to track and follow patients through our clinical network and a specific COVID-19 registry. We have identified the MS cases infected with COVID-19 and are bringing those records together.
Jodie Haselkorn: There are a number of efforts in mental health moving forward to examine depression and anxiety during COVID-19. Individuals with MS have increased rates of depression and anxiety above those of the general population during usual times. The literature reports an increase in anxiety and depression in the general population associated with the pandemic, and veterans with MS seem to be reporting these symptoms more frequently as well. We will be able to use the registry to assess the impacts of COVID-19 on depression and anxiety in veterans with MS.
Providing MS Care During COVID-19
Jodie Haselkorn: The transition to telehealth during COVID-19 has been surprisingly seamless with some additional training for veterans and HCPs. I initially experienced an inefficiency in my clinic visit productivity. It took me longer to see a veteran because I wasn’t doing telehealth in our clinic with support staff and residents; my examination had to change, my documentation template needed to be restructured, and the coding was different. Sometimes I saw a veteran in clinic, and my next appointment required me to move back to my office in another building for a telehealth appointment. Teaching virtual trainees who also participated in the clinic encounters had its own challenges and rewards. My ‘motor routine’ was disrupted.
Rebecca Spain: There’s a real learning curve for telehealth in terms of how comfortable you feel with the data you get by telephone or video and how reliable that is. There are issues based on technology factors—like the patient’s bandwidth—because determining how smooth their motions are is challenging if you have a jerky, intermittent signal. I learned quickly to always do the physical examination first because I might lose video connection partway through and have to switch to a phone visit!
It’s still an open question how much we are missing by using video rather than in-person visits. And what are the long-term health outcomes and implications of that? That is something that needs to be studied in neurology, where we pride ourselves on the physical examination. When we move to a virtual physical examination, is there a cost? There are incredible gains using telehealth in terms of convenience and access to care, which may outweigh some of the drawbacks in particular cases.
There are also pandemic challenges in terms of clinic workflow. At VA Portland Health Care System in Oregon, I have 3 clinics on Friday morning: telephone, virtual, and face-to-face. It’s a real struggle for the schedulers. And because of the transition to new system workflows to accommodate this, some patient visits have been dropped, lost, or scheduled incorrectly.
Heidi Maloni: As the nurse in this group, I agree with everything that Becca and Jodie have said about telehealth. But, I have found some benefits, and one of them is a greater intimacy with my patients. What do I mean by that? For instance, if a patient has taken me to their kitchen and opened their cupboard to show me the breakfast cereal, I’m also observing that there’s nothing else in that cupboard other than cereal. I’m also putting some things together about health and wellness. Or, for the first time, I might meet their significant other who can’t come to clinic because they’re working, but they are at home with the patient. And then having that 3-way conversation with the patient and the significant other, that’s kind of opened up my sense of who that person is.
You are right about the neurological examination. It’s challenging to make exacting assessments. When gathering household objects, such as ice bags and pronged forks, to assess sensation, you remember that this exam is subjective and that there is meaning in this remote evaluation. But all in all, I have been blessed with telehealth. Patients don’t mind it at all. They’re completely open to the idea. They like telehealth for the contact they are able to have with their HCP.
Jodie Haselkorn: As you were saying that, Heidi, I thought, I’ve been inside my veterans’ bathrooms virtually and have seen all of their equipment that they have at home. In a face-to-face clinic visit, you don’t have an opportunity to see all their canes and walkers, braces, and other assistive technology. Some of it’s stashed in a closet, some of it under the bed. In a virtual visit, I get to understand why some is not used, what veterans prefer, and see their own innovations for mobility and self-care.
Mitchell Wallin: There’s a typical ritual that patients talk about when they go to a clinic. They check in, sit down, and wait for the nurse to give them their vital signs and set them up in the room. And then they meet with their HCP, and finally they complete the tasks on the checklist. And part of that may mean scheduling an MRI or going to the lab. But some of these handoffs don’t happen as well on telehealth. Maybe we haven’t integrated these segments of a clinical visit into telehealth platforms. But it could be developed, and there could be new neurologic tools to improve the interview and physical examination. Twenty years ago, you couldn’t deposit a check on your phone; but now you can do everything on your phone you could do in a physical bank. With some creativity, we can improve parts of the neurological exam that are currently difficult to assess remotely.
Jodie Haselkorn: I have not used peripherals in video telehealth to home, and I would need to become accustomed to their use with current technology and train patients and caregivers. I would like telehealth peripherals such as a stethoscope to listen to the abdomen of a veteran with neurogenic bowel or a user-friendly ultrasound probe to measure postvoid residual urine in an individual with symptoms of neurogenic bladder, in addition to devices that measure walking speed and pulmonary function. I look forward to the development, use, and incorporation of peripherals that will enable a more extensive virtual exam within the home.
What are the MS Centers of Excellence working on now?
Jodie Haselkorn: We are working to understand the health care needs of veterans with MS by evaluating not only care for MS within the VA, but also the types and quantity of MS specialty care that veterans are receiving in the community during the pandemic. Dr. Wallin is also using the registry to lead a telehealth study to capture the variety of codes that VA health professionals in MS have used to document workload for telehealth, face-to-face, and telephone encounters.
Rebecca Spain: The MS Center of Excellence (MSCoE) is coming out with note templates for HCPs, which we can refine as we gain experience. This is one way we can promote high standards in MS care by making these ancillary tools more productive.
Jodie Haselkorn: We are looking at different ways to achieve a high-quality virtual examination using standardized examination strategies and patient and caregiver information to prepare for a specialty MS visit.
Rebecca Spain: I would like to, in more of a research setting, study health outcomes using telehealth vs in person and start tracking that long term.
Mitchell Wallin: We can probably do more in terms of standardization, such as routine patient-reported surveys and implementing the new Consortium of Multiple Sclerosis Centers’ international MRI criteria. The COVID-19 pandemic has affected everything in medical care. But we want to have regular standardized outcomes to assess, and if we can start to do some of the standard data collection through telemedicine, it becomes part of our regular clinic data.
Heidi Maloni: We need better technology. You can do electrocardiograms on your watch. Could we do Dinamaps? Could we figure out strength? That’s a wish list.
Jodie Haselkorn: Since the MSCoE is a national program, we were set up to do what we needed to do for education. We were able to continue with all of our HCP webinars, including the series with the National MS Society (NMSS). We also have a Specialty Care Access Network-Extension for Community Healthcare Outcomes (SCAN-ECHO) series with the Northwest ECHO VA program and collaborated with the Can Do MS program on patient education as well. We’ve sent out 2 printed newsletters for veterans. The training of HCPs for the future has continued as well. All of our postdoctoral fellows finished their programs on time and moved on either to clinical practice or to career development grants to continue their VA careers; a new fellow has joined, and our other fellows are continuing as planned.
The loss that we sustained was in-person meetings. We held MSCoE Regional Program meetings in the East and West that combined education and administrative goals. Both were well attended and successful. There was a lot of virtual education available from multiple sources. It was challenging this year to anticipate what education programming people wanted from MSCoE. Interestingly, a lot of our regional HCPs did not want much more COVID-19 education. They wanted other education, and we were able to meet those needs.
Did the pandemic impact the VA MS registry?
Mitchell Wallin: Like any electronic product, the VA MS Surveillance Registry must be maintained, and we have tried to encourage people to use it. Our biggest concern was to identify patients with MS who were infected with COVID-19 and to enter them into the registry. In some cases, veterans with MS were in locations without an MS clinic. So we’ve spent a lot more time identifying those cases and adjudicating them to make sure their infection and MS were documented correctly.
During the COVID-19 pandemic, the VA health care system has been taxed like others, and HCPs have been a lot busier than normal, forcing new workflows. It has been a hard year that way because a lot of HCPs have been doing many other jobs to help maintain patient care.
Heidi Maloni: The impact of COVID-19 has been positive for the registry because we’ve had more opportunities to populate it.
Jodie Haselkorn: Dr. Wallin and the COVID-19 registry group began building the combined registry at the onset of the pandemic. We have developed the capacity to identify COVID-19 infections in veterans who have MS and receive care in the VA. We entered these cases in the MS Surveillance Registry and have developed a linkage with the national VA COVID-19 registry. We are in the middle of the grunt work of case entry, but it is a rich resource.
How has the pandemic impacted MS research?
Rebecca Spain: COVID-19 has put a big damper on clinical research progress, including some of our MSCoE studies. It has been difficult to have subjects come in for clinical visits. It’s been difficult to get approval for new studies. It’s shifted timelines dramatically, and then that always increases budgets in a time when there’s not a lot of extra money. So, for clinical research, it’s been a real struggle and a strain and an ever-moving target. For laboratory research most, if not all, centers that have laboratory research at some point were closed and have only slowly reopened. Some still haven’t reopened to any kind of research or laboratory. So, it’s been tough, I think, on research in general.
Heidi Maloni: I would say the word is devastating. The pandemic essentially put a stop to in-person research studies. Our hospital was in research phase I, meaning human subjects can only participate in a research study if they are an inpatient or an outpatient with an established clinic visit (clinics open to 25% occupancy) or involved in a study requiring safety monitoring. This plan limits the risk of COVID-19 exposure.
Rebecca Spain: There is a risk of a higher dropout rate of subjects from studies, meaning there’s less chance of success in finding answers if enough people don’t stay in. At a certain point, you have to ask, “Is this going to be a successful study?”
Jodie Haselkorn: Dr. Spain has done an amazing job leading a multisite, international clinical trial funded by the VA and the NMSS and kept it afloat, despite challenges. The pandemic has had impacts, but the study continues to move towards completion. I’ve appreciated the efforts of the Research Service at VA Puget Sound to ensure that we could safely obtain many of the 12-month outcomes for all the participants enrolled in that study.
Mitchell Wallin: The funding for some of our nonprofit partners, including the Paralyzed Veterans Association (PVA) and the NMSS, has suffered as well and so a lot of their funding programs have closed or been cut back during the pandemic. Despite that, we still have been able to use televideo technology for our clinical and educational programs with our network.
Jodie Haselkorn: MSCoE also does health services and epidemiological studies in addition to clinical trials and that work has continued. Quite a few of the studies that had human subjects in them were completed in terms of data collection, and so those are being analyzed. There will be a drop in funded studies, publications and posters as the pandemic continues and for a recovery period. We have a robust baseline for research productivity and a talented team. We’ll be able to track drop off and recovery over time.
Rebecca Spain: There’s going to be long-term consequences that we don’t see right now, especially for young researchers who have missed getting pilot data which would have led to additional small grants and then later large grants. There’s going to be an education gap that’s going on with all of the kids who are not able to go to school properly. It’s part of that whole swath of lost time and lost opportunity that we will have to deal with.
However, there are going to be some positive changes. We’re now busy designing clinical trials that can be done virtually to minimize any contact with the health facility, and then looking at things like shifting to research ideas that are more focused around health services.
Jodie Haselkorn: Given the current impacts of the pandemic on delivery of health care there is a strong interest in looking at how we can deliver health care in ways that accommodates the consumers and the providers perspectives. In the future we see marked impacts in our abilities to deliver care to Veterans with MS.
As a final thought, I wanted to put in a plug for this talented team. One of our pandemic resolutions was to innovatively find new possibilities and avoid negative focus on small changes. We are fortunate that all our staff have remained healthy and been supportive and compassionate with each other throughout this period. We have met our goals and are still moving forward.
MSCoE has benefited from the supportive leadership of Sharyl Martini, MD, PhD, and Glenn Graham, MD, PhD, in VA Specialty Care Neurology and leadership and space from VA Puget Sound, VA Portland Health Care System, the Washington DC VA Medical Center and VA Maryland Health Care System in Baltimore.
We also have a national advisory system that is actively involved, sets high standards and performs a rigorous annual review. We have rich inputs from the VA National Regional Programs and Veterans. Additionally, we have had the leadership and opportunities to collaborate with outside organizations including, the Consortium of MS Centers, the NMSS, and the PVA. We have been fortunate.
The following is a lightly edited transcript of a teleconference recorded in February 2021.
How has COVID-19 impacted veterans with multiple sclerosis?
Mitchell Wallin, MD, MPH: There has been a lot of concern in the multiple sclerosis (MS) patient community about getting infected with COVID-19 and what to do about it. Now that there are vaccines, the concern is whether and how to get a vaccine. At least here, in the Washington DC/Baltimore area where I practice, we have seen many veterans hospitalized with COVID-19, some with MS, and some who have died of COVID-19. So, there has been a lot of fear, especially among older veterans with comorbid diseases.
Rebecca Spain, MD, MSPH: There also has been an impact on our ability to provide care to our veterans with MS. There are challenges having them come into the office or providing virtual care. There are additional challenges and concerns this year about making changes in MS medications because we can’t see patients in person to understand their needs or the current status of their MS. So, providing care has been a challenge this year as well.
There has also been an impact on our day-to-day lives, like there has been for all of us, from the lockdown, particularly not being able to exercise and socialize as much. There have been physical, social, and emotional tolls that this pandemic has taken on veterans with MS.
Jodie Haselkorn, MD, MPH: Survivors of COVID-19 who are transferred to an inpatient multidisciplinary rehabilitation unit to address cardiopulmonary impairments, immobility, psychological impacts, and other medical complications are highly motivated to work with the team to achieve a safe discharge. The US Department of Veterans Affairs (VA) Rehabilitation Services has much to offer them.
Heidi Maloni, PhD, NP: Veterans with MS are not at greater risk simply because they are diagnosed with MS. But their comorbidities, such as hypertension or obesity, and factors such as older age and increased disability can increase the risk of COVID-19 infection and of poorer outcomes if infected.
Veterans have asked “Am I at greater risk? Do I need to do something more to protect myself?” I have had innumerable veterans call and ask whether I can write them letters for their employer to ensure that they work at home longer rather than go into the workplace because they’re very nervous and don’t feel confident that masking and distancing is really going to be protective.
Mitchell Wallin: We are analyzing some of our data in the VA health care system related to COVID-19 infections in the MS population. We can’t say for sure what our numbers are, but our rates of infection and hospitalization are higher than in the general population, and we will soon have a report. We have a majority male population, which is different from the general MS population, which is predominantly female. The proportion of minority patients in the VA mirrors that of the US population. These demographic factors, along with a high level of comorbid disease, put veterans at high risk for acquiring COVID-19. So, in some ways it’s hard to compare when you look at reports from other countries or the US National MS-COVID-19 Registry, which captures a population that is predominantly female. In the VA, our age range spans from the 20s to almost 100 years. We must understand our population to prevent COVID-19 and better care for the most vulnerable.
Rebecca Spain: Heidi, my understanding, although the numbers are small, is that for the most part veterans with MS who are older are at higher risk of complications and death, which is also true of the general population. But there is an additional risk for people with MS who have higher disability levels. My understanding from reading the literature was that people with MS who need a cane to walk or greater assistance for mobility were at higher risk for COVID-19 complications, including mortality. I have been particularly encouraged that in many places this special population of people with MS is getting vaccinated sooner.
Heidi Maloni: I completely agree; you said it very clearly, Becca. Their disability level puts them at risk.
Rebecca Spain: Disability is a comorbidity.
Heidi Maloni: Yes. Just sitting in a wheelchair and not being able to get a full breath, or having problems with respiratory effort, really does put you at risk for a poor outcome if you were to have COVID-19.
Are there other ancillary impacts from COVID-19 for patients with MS?
Jodie Haselkorn: Individuals who are hospitalized with COVID-19 miss social touch and social support from family and friends. They miss familiar conversations, a hug, and having someone hold their hand. The acute phase of the infection limits professional face-to-face interaction with patients because of time constraints and protective garments. There are reports of negative consequences of this isolation, and social reintegration of COVID-19 survivors is a necessary part of rehabilitation.
Mitchell Wallin: For certain procedures (eg, magnetic resonance imaging [MRI]) or consultations, we need to bring people into the medical center. Many clinical encounters, however, can be done through telemedicine and both the VA and the US Department of Defense systems were set up to execute this type of visit. We had been doing telemedicine for a long time before the pandemic and we were in a better position than a lot of other health systems to shift to a virtual format with COVID-19. We had to ramp up a little bit and get our tools working a little more effectively for all clinics, but I think we were prepared to broadly execute telemedicine clinics for the pandemic.
Jodie Haselkorn: I agree that the VA infrastructure was ahead of most other health systems in terms of readiness for telehealth and maintaining access to care. Not all health care providers (HCPs) were using it, but the system was there and included a telehealth coordinator in every facility who could get health care professionals up to speed quickly. Additionally, a system was in place to provide veterans and caregivers with telehealth home equipment and training. Another thing that really helped was the MISSION Act. Veterans who have difficulty traveling for an appointment may be able to seek care outside of the VA within their own community. They may be able to go to a local facility for laboratory or radiologic studies or continue rehabilitation closer to home.
VA MS Registry Data
Rebecca Spain: Mitch, there are many interesting things we can learn about the interplay between COVID-19 and MS using registries such as how it affects people based on rural vs metropolitan living, whether people are living in single family homes or not as a proxy marker for social support, and so on.
Mitchell Wallin: We have both an MS registry to track and follow patients through our clinical network and a specific COVID-19 registry in the VA. We have identified the MS cases infected with COVID-19 and are bringing the two together.
Jodie Haselkorn: There are a number of efforts in mental health that are moving forward to examine depression and anxiety during COVID-19. Individuals with MS have rates of depression and anxiety above those of the general population during usual times. The literature reports an increase in anxiety and depression in the general population associated with the pandemic, and veterans with MS seem to be reporting these symptoms more frequently as well. We will be able to use the registry to assess the impacts of COVID-19 on depression and anxiety in veterans with MS.
Providing MS Care During COVID-19
Jodie Haselkorn: The transition to telehealth during COVID-19 has been surprisingly seamless with some additional training for veterans and HCPs. I initially experienced an inefficiency in my clinic visit productivity. It took me longer to see a veteran because I wasn’t doing telehealth in our clinic with support staff and residents, my examination had to change, my documentation template needed to be restructured, and the coding was different. Sometimes I saw a veteran in clinic, and my next appointment required me to move back to my office in another building for a telehealth appointment. Teaching virtual trainees who also participated in the clinic encounters had its own challenges and rewards. My ‘motor routine’ was disrupted.
Rebecca Spain: There’s a real learning curve for telehealth in terms of how comfortable you feel with the data you get by telephone or video and how reliable that is. There are issues based on technology factors—like the patient’s bandwidth—because determining how smooth their motions are is challenging if you have a jerky, intermittent signal. I learned quickly to always do the physical examination first because I might lose video connection partway through and have to switch to a phone visit!
It’s still an open question how much we are missing by using video rather than in-person visits. And what are the long-term health outcomes and implications of that? That is something that needs to be studied in neurology, where we pride ourselves on the physical examination. When we move to a virtual physical examination, is there a cost? There are incredible gains using telehealth in terms of convenience and access to care, which may outweigh some of the drawbacks in particular cases.
There are also pandemic challenges in terms of clinic workflow. At VA Portland Health Care System in Oregon, I have 3 clinics on Friday mornings: telephone, virtual, and face-to-face. It’s a real struggle for the schedulers. And because of the transition to new system workflows to accommodate this, some patient visits have been dropped, lost, or scheduled incorrectly.
Heidi Maloni: As the nurse in this group, I agree with everything that Becca and Jodie have said about telehealth. But, I have found some benefits, and one of them is a greater intimacy with my patients. What do I mean by that? For instance, if a patient has taken me to their kitchen and opened their cupboard to show me the breakfast cereal, I’m also observing that there’s nothing else in that cupboard other than cereal. I’m also putting some things together about health and wellness. Or, for the first time, I might meet their significant other who can’t come to clinic because they’re working, but they are at home with the patient. And then having that 3-way conversation with the patient and the significant other, that’s kind of opened up my sense of who that person is.
You are right about the neurological examination. It’s challenging to make exacting assessments. When gathering household objects, such as ice bags and pronged forks, to assess sensation, you remember that this examination is subjective and that there is still meaning in the remote evaluation. But all in all, I have been blessed with telehealth. Patients don’t mind it at all. They’re completely open to the idea. They like telehealth for the contact they are able to have with their HCP.
Jodie Haselkorn: As you were saying that, Heidi, I thought, I’ve been inside my veterans’ bathrooms virtually and have seen all of their equipment that they have at home. In a face-to-face clinic visit, you don’t have an opportunity to see all their canes and walkers, braces, and other assistive technology. Some of it’s stashed in a closet, some of it under the bed. In a virtual visit, I get to understand why some is not used, what veterans prefer, and see their own innovations for mobility and self-care.
Mitchell Wallin: There’s a typical ritual that patients talk about when they go to a clinic. They check in, sit down, and wait for the nurse to give them their vital signs and set them up in the room. And then they meet with their HCP, and finally they complete the tasks on the checklist. And part of that may mean scheduling an MRI or going to the lab. But some of these handoffs don’t happen as well on telehealth. Maybe we haven’t integrated these segments of a clinical visit into telehealth platforms. But it could be developed, and there could be new neurologic tools to improve the interview and physical examination. Twenty years ago, you couldn’t deposit a check on your phone; but now you can do everything on your phone you could do in a physical bank. With some creativity, we can improve parts of the neurological exam that are currently difficult to assess remotely.
Jodie Haselkorn: I have not used peripherals in video telehealth to the home, and I would need to become accustomed to their use with current technology and train patients and caregivers. I would like telehealth peripherals such as a stethoscope to listen to the abdomen of a veteran with neurogenic bowel or a user-friendly ultrasound probe to measure postvoid residual urine in an individual with symptoms of neurogenic bladder, in addition to devices that measure walking speed and pulmonary function. I look forward to the development, use, and incorporation of peripherals that will enable a more extensive virtual examination within the home.
What are the MS Centers of Excellence working on now?
Jodie Haselkorn: We are working to understand the health care needs of veterans with MS by evaluating not only care for MS within the VA, but also the types and quantity of MS specialty care that veterans are receiving in the community during the pandemic. Dr. Wallin is also using the registry to lead a telehealth study to capture the variety of codes that VA health professionals in MS have used to document workload for telehealth, face-to-face, and telephone encounters.
Rebecca Spain: The MS Center of Excellence (MSCoE) is coming out with note templates to be available for HCPs, which we can refine as we gain experience. This is one way we can promote high standards in MS care by making these ancillary tools more productive.
Jodie Haselkorn: We are looking at different ways to achieve a high-quality virtual examination using standardized examination strategies and patient and caregiver information to prepare for a specialty MS visit.
Rebecca Spain: I would like to, in more of a research setting, study health outcomes using telehealth vs in person and start tracking that long term.
Mitchell Wallin: We can probably do more in terms of standardization, such as routine patient-reported surveys and implementing the new Consortium of Multiple Sclerosis Centers’ International MRI criteria. The COVID-19 pandemic has affected everything in medical care. But we want to have regular standardized outcomes to assess, and if we can start to do some of the standard data collection through telemedicine, it becomes part of our regular clinic data.
Heidi Maloni: We need better technology. You can do electrocardiograms on your watch. Could we do Dinamaps? Could we figure out strength? That’s a wish list.
Jodie Haselkorn: Since the MSCoE is a national program, we were set up to do what we needed to do for education. We were able to continue all of our HCP webinars, including the series with the National MS Society (NMSS). We also have a Specialty Care Access Network-Extension for Community Healthcare Outcomes (SCAN-ECHO) series with the Northwest ECHO VA program and collaborated with the Can Do MS program on patient education as well. We’ve sent out 2 printed newsletters for veterans. The training of HCPs for the future has continued as well. All of our postdoctoral fellows have finished their programs on time and moved on either to clinical practice or to career development grants to continue their VA careers, a new fellow has joined, and our other fellows are continuing as planned.
The loss that we sustained was in-person meetings. We held MSCoE Regional Program meetings in the East and West that combined education and administrative goals. Both were well attended and successful. There was a lot of virtual education available from multiple sources. It was challenging this year to anticipate what education programming people wanted from MSCoE. Interestingly, a lot of our regional HCPs did not want much more COVID-19 education. They wanted other education, and we were able to meet those needs.
Did the pandemic impact the VA MS registry?
Mitchell Wallin: Like any electronic product, the VA MS Surveillance Registry must be maintained, and we have tried to encourage people to use it. Our biggest concern was to identify veterans with MS who were infected with COVID-19 and to enter those people into the registry. In some cases, veterans with MS were in locations without an MS clinic. So, we’ve spent a lot more time identifying those cases and adjudicating them to make sure their infection and MS were documented correctly.
During the COVID-19 pandemic, the VA healthcare system has been taxed like others and so HCPs have been a lot busier than normal, forcing new workflows. It has been a hard year that way because a lot of health care providers have been doing many other jobs to help maintain patient care during the COVID-19 pandemic.
Heidi Maloni: The impact of COVID-19 has been positive for the registry because we’ve had more opportunities to populate it.
Jodie Haselkorn: Dr. Wallin and the COVID-19 Registry group began building the combined registry at the onset of the pandemic. We have developed the capacity to identify COVID-19 infections in veterans who have MS and receive care in the VA. We entered these cases in the MS Surveillance Registry and have developed a linkage with the national VA COVID-19 registry. We are in the middle of the grunt work of case entry, but it is a rich resource.
How has the pandemic impacted MS research?
Rebecca Spain: COVID-19 has put a big damper on clinical research progress, including some of our MSCoE studies. It has been difficult to have subjects come in for clinical visits. It’s been difficult to get approval for new studies. It’s shifted timelines dramatically, and then that always increases budgets in a time when there’s not a lot of extra money. So, for clinical research, it’s been a real struggle and a strain and an ever-moving target. For laboratory research most, if not all, centers that have laboratory research at some point were closed and have only slowly reopened. Some still haven’t reopened to any kind of research or laboratory. So, it’s been tough, I think, on research in general.
Heidi Maloni: I would say the word is devastating. The pandemic essentially put a stop to in-person research studies. Our hospital was in research phase I, meaning human subjects can participate in a research study only if they are an inpatient or an outpatient with an established clinic visit (clinics open to 25% occupancy) or are involved in a study requiring safety monitoring. This plan limits the risk of COVID-19 exposure.
Rebecca Spain: There is risk for a higher dropout rate of subjects from studies, meaning there’s less chance of success in finding answers if not enough people stay in. At a certain point, you have to ask, “Is this going to be a successful study?”
Jodie Haselkorn: Dr. Spain has done an amazing job leading a multisite, international clinical trial funded by the VA and the NMSS and kept it afloat, despite challenges. The pandemic has had impacts, but the study continues to move towards completion. I’ve appreciated the efforts of the Research Service at VA Puget Sound to ensure that we could safely obtain many of the 12-month outcomes for all the participants enrolled in that study.
Mitchell Wallin: The funding for some of our nonprofit partners, including the Paralyzed Veterans of America (PVA) and the NMSS, has suffered as well, and so a lot of their funding programs have closed or been cut back during the pandemic. Despite that, we have still been able to use televideo technology for our clinical and educational programs with our network.
Jodie Haselkorn: MSCoE also does health services and epidemiological studies in addition to clinical trials and that work has continued. Quite a few of the studies that had human subjects in them were completed in terms of data collection, and so those are being analyzed. There will be a drop in funded studies, publications and posters as the pandemic continues and for a recovery period. We have a robust baseline for research productivity and a talented team. We’ll be able to track drop off and recovery over time.
Rebecca Spain: There’s going to be long-term consequences that we don’t see right now, especially for young researchers who have missed getting pilot data which would have led to additional small grants and then later large grants. There’s going to be an education gap that’s going on with all of the kids who are not able to go to school properly. It’s part of that whole swath of lost time and lost opportunity that we will have to deal with.
However, there are going to be some positive changes. We’re now busy designing clinical trials that can be done virtually to minimize any contact with the health facility, and then looking at things like shifting to research ideas that are more focused around health services.
Jodie Haselkorn: Given the current impacts of the pandemic on the delivery of health care, there is a strong interest in looking at how we can deliver health care in ways that accommodate both consumers’ and providers’ perspectives. In the future, we see marked impacts on our ability to deliver care to veterans with MS.
As a final thought, I wanted to put in a plug for this talented team. One of our pandemic resolutions was to innovatively find new possibilities and avoid negative focus on small changes. We are fortunate that all our staff have remained healthy and been supportive and compassionate with each other throughout this period. We have met our goals and are still moving forward.
MSCoE has benefited from the supportive leadership of Sharyl Martini, MD, PhD, and Glenn Graham, MD, PhD, in VA Specialty Care Neurology and leadership and space from VA Puget Sound, VA Portland Health Care System, the Washington DC VA Medical Center and VA Maryland Health Care System in Baltimore.
We also have a national advisory system that is actively involved, sets high standards, and performs a rigorous annual review. We have rich input from the VA National Regional Programs and veterans. Additionally, we have had the leadership and opportunities to collaborate with outside organizations, including the Consortium of MS Centers, the NMSS, and the PVA. We have been fortunate.