Use of HbA1c in the Diagnosis of Diabetes in Adolescents
Study Overview
Objective. To examine the screening practices of family practitioners (FPs) and pediatricians for type 2 diabetes (T2D) in adolescents.
Design. Cross-sectional study.
Setting and participants. Using the American Medical Association Physician Masterfile, the researchers randomly sampled 700 pediatricians and 700 FPs engaged in direct patient care and surveyed them by mail. Providers were excluded if they were residents, hospital staff, retirees, employed by federally owned medical facilities, certified in a subspecialty, or over age 70.
Main outcome measures. Providers were given a hypothetical case of an obese, female, teenaged patient with concurrent associated risk factors for T2D (family history of T2D, minority race, signs of insulin resistance) and asked what initial screening tests they would order. Respondents were then informed of the updated American Diabetes Association (ADA) guidelines, which added hemoglobin A1c (HbA1c) as a screening test to diagnose diabetes. The survey then asked whether knowing of this change in recommendations had changed or would change their screening practices in adolescents.
Main results. Of the 1400 surveys mailed, 2 were excluded because of mailing issues; 52% of the remaining providers responded (398 pediatricians and 335 FPs). Of these, 129 reported that they did not care for adolescents (age 10–17), leaving a final sample of 604 providers.
The vast majority (92%) said they would screen the hypothetical case for diabetes, with most initially ordering a fasting test (fasting plasma glucose or 2-hour glucose tolerance test) (63%) or an HbA1c test (58%). Of the 58% who planned to order HbA1c, only 35% would order it in combination with a fasting test. HbA1c was significantly more likely to be ordered by pediatricians than by FPs (P = 0.001). After being presented with the new guidelines, 84% said they would now order HbA1c, an increase of roughly 27 percentage points.
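For readers who want to verify the figures above, the short sketch below (plain Python, using only the numbers reported in this summary) reproduces the sample flow and shows that the shift from 58% to 84% is an absolute change of roughly 26–27 percentage points, or about a 45% relative increase.

```python
# Quick check of the arithmetic reported above, using only the figures given
# in the study summary.
mailed = 1400
undeliverable = 2
respondents = 398 + 335                                    # pediatricians + FPs = 733
response_rate = respondents / (mailed - undeliverable)     # ~0.52, matching the reported 52%

did_not_care_for_adolescents = 129
final_sample = respondents - did_not_care_for_adolescents  # 604 providers analyzed

before, after = 0.58, 0.84                                 # proportion planning to order HbA1c
absolute_change_pts = (after - before) * 100               # ~26 percentage points
relative_change_pct = (after / before - 1) * 100           # ~45% relative increase

print(round(response_rate, 2), final_sample,
      round(absolute_change_pts), round(relative_change_pct))
```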
Conclusion. In response to information about the new guidelines, providers were more likely to order HbA1c as part of initial testing. Given the test's lower performance in children and its higher cost, the use of HbA1c without fasting tests may result in missed diagnoses of T2D in adolescents as well as increased health care costs.
Commentary
Rates of childhood obesity continue to rise throughout the United States. Obese children are at risk for numerous comorbidities, such as hypertension, hyperlipidemia, and T2D [1,2]. It is important for providers to use effective screening tools for risk assessment of prediabetes and T2D in children.
The standard tests for diagnosing diabetes are the fasting plasma glucose test and the 2-hour plasma glucose test. While accurate, these tests are not convenient. In 2010, the ADA added an easier method of testing for T2D: the HbA1c test, with a result of 6.5% or greater indicating diabetes [3]. However, this recommendation is controversial, given studies suggesting that HbA1c is not as reliable in children as it is in adults [4–6]. The ADA itself acknowledges that there are limited data in the pediatric population.
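To make the diagnostic cutoffs concrete, the sketch below encodes them as a simple screening check. Only the HbA1c threshold (≥ 6.5%) comes from the text; the fasting and 2-hour glucose cutoffs (126 mg/dL and 200 mg/dL) are the standard ADA thresholds and are included only for context, since, as noted above, HbA1c alone may be less reliable in children.

```python
# Illustrative screening check using the ADA diagnostic cutoffs discussed above.
# The HbA1c threshold is from the text; the glucose thresholds are the standard
# ADA cutoffs, shown only for context (not part of the study itself).

def diabetes_criteria_met(hba1c_pct=None, fasting_glucose_mg_dl=None, ogtt_2h_mg_dl=None):
    """Return the list of diagnostic criteria met by a set of screening results."""
    met = []
    if hba1c_pct is not None and hba1c_pct >= 6.5:
        met.append("HbA1c >= 6.5%")
    if fasting_glucose_mg_dl is not None and fasting_glucose_mg_dl >= 126:
        met.append("fasting plasma glucose >= 126 mg/dL")
    if ogtt_2h_mg_dl is not None and ogtt_2h_mg_dl >= 200:
        met.append("2-hour plasma glucose >= 200 mg/dL")
    return met

# Example: a nonfasting adolescent with an HbA1c of 6.7% flags on the HbA1c
# criterion alone; confirmatory testing would still be expected.
print(diabetes_criteria_met(hba1c_pct=6.7))
```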
In this study, most providers were unaware of the 4-year-old revised guidelines offering the HbA1c option but planned to apply the guidelines going forward. According to the study, this would translate into a roughly 27-percentage-point increase in providers utilizing HbA1c.
Should increased uptake of HbA1c as an initial screening test be a concern? Using it in combination with other tests may be useful for assessing which adolescents need further testing [3–6]. Additionally, by starting with a test that can be performed in the office regardless of fasting status, primary care providers treating adolescents may identify more cases of T2D.
A weakness of the study is the potential for response bias related to mailed surveys. An additional weakness is that the researchers utilized only 1 hypothetical situation. Providing additional hypothetical situations may have allowed for further understanding of screening practices. The investigators also did not include nurse practitioners or physician assistants in their sample, a growing percentage of whom may care for adolescent populations at risk for T2D or be primary referral sources.
Applications for Clinical Practice
Providers can use HbA1c to screen for diabetes in nonfasting adolescents at risk for diabetes. While the test may not be as accurate in pediatric patients, utilizing HbA1c as directed by the ADA may aid in diagnosing patients who might otherwise miss follow-up appointments to complete a fasting test.
—Jennifer L. Nahum, MSN, CPNP-AC, PPCNP-BC, and Allison Squires, PhD, RN
1. Freedman DS, Dietz WH, Srinivasan SR, Berenson GS. The relation of overweight to cardiovascular risk factors among children and adolescents: the Bogalusa Heart Study. Pediatrics 1999;103(6 Pt 1):1175–82.
2. Pinhas-Hamiel O, Dolan LM, Daniels SR, Standiford D, Khoury PR, Zeitler P. Increased incidence of non-insulin-dependent diabetes mellitus among adolescents. J Pediatr 1996;128(5 Pt 1):608–15.
3. American Diabetes Association. Type 2 diabetes in children and adolescents. Pediatrics 2000 Mar;105(3 Pt 1):671–80.
4. Lee JM, Gebremariam A, Wu EL, et al. Evaluation of nonfasting tests to screen for childhood and adolescent dysglycemia. Diabetes Care 2011;34:2597–602.
5. Nowicka P, Santoro N, Liu H, et al. Utility of hemoglobin A(1c) for diagnosing prediabetes and diabetes in obese children and adolescents. Diabetes Care 2011;34:1306–11.
6. Lee JM, Wu EL, Tarini B, et al. Diagnosis of diabetes using hemoglobin A1c: should recommendations in adults be extrapolated to adolescents? J Pediatr 2011;158:947–52.
Quality of Life in Aging Multiple Sclerosis Patients
Study Overview
Objective. To evaluate the association between clinical and demographic factors and health-related quality of life (HRQOL) among older people with multiple sclerosis (MS).
Design. Cross-sectional survey-based study.
Setting and participants. Patients with MS aged 60 years or older were recruited from 4 MS centers in Long Island, NY. Patients with severe cognitive impairment as determined by the health care practitioner were excluded. Participants were asked to complete 3 surveys at 3 different time-points. In the first survey, participants completed the Morisky Medication Adherence Scale and the Patient Multiple Sclerosis Neuropsychological Screening Questionnaire (P-MSNQ). The second survey was the Multiple Sclerosis Quality of Life-54 (MSQOL-54), and the third survey included the Beck Depression Inventory-II (BDI-II) and a disability status self-assessment scale. Cognitive function was measured at the time of recruitment using the Symbol Digit Modalities Test (SDMT).
Analysis. The Andersen Healthcare Utilization model was used to structure the multivariate regression analysis. This model identifies multiple domains affecting quality of life, and the variables from the surveys were categorized according to domain: predisposing characteristics (demographic variables), enabling resources (caregiver support and living situation), needs (eg, health-related measures), and health behaviors (medication use, adherence).
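To make the analytic structure concrete, the sketch below groups the study's measures into the four Andersen model domains described above. The grouping paraphrases the Analysis section; the exact variable list (for example, which demographics were included) is assumed for illustration and is not the authors' analysis code.

```python
# Illustrative grouping of the study measures into Andersen model domains,
# paraphrasing the Analysis section (not the authors' actual analysis code).
andersen_domains = {
    "predisposing characteristics": ["age", "sex", "race/ethnicity",
                                     "education", "marital status", "employment"],
    "enabling resources": ["caregiver support", "living situation"],
    "needs": ["BDI-II depression score", "P-MSNQ neuropsychological risk",
              "SDMT cognition", "self-assessed disability status"],
    "health behaviors": ["medication use", "Morisky adherence score"],
}

# Each domain contributes covariates to the multivariate regression, with the
# MSQOL-54 physical and mental composites as the outcomes.
for domain, variables in andersen_domains.items():
    print(f"{domain}: {', '.join(variables)}")
```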
Main results. A total of 211 participants completed the first survey, 188 the second, and 179 the third. 80% were female and 95% were white. Average age was 65.5 (SD 5.6) years. Based on their SDMT scores, 56% of respondents were classified as cognitively impaired. Risk of neuropsychological impairment, depression, and disability status were significantly associated with decreased mental and physical HRQOL. Notably, there was a strong association between predisposing characteristics and QOL. Being widowed and remaining employed were the strongest predictors of better physical QOL, and having an education level of high school or less predicted lower mental HRQOL.
Conclusion. Clinicians should measure HRQOL in older MS patients regularly and assess for depression and cognitive impairment.
Commentary
Quality of life is an important marker of MS patients’ well-being as they cope with this chronic illness [1]. The progression of the disease and its symptomatology often negatively affect HRQOL. However, multiple psychosocial factors, such as coping, mood, self-efficacy, and perceived support, affect QOL of patients with MS more than biological variables such as weakness or burden of radiologic disease [2]. For example, many self-report HRQOL indices are strongly predicted by measures of depression [3]. In addition, many studies have found a positive association between physical disability and reduced QOL [4,5]. Further, while perceived HRQOL may be a meaningful outcome in itself, it may also be a predictor for outcomes such as disability-related changes [6].
MS leads to disability and loss of function in all age-groups, but only a few studies have focused on HRQOL among elderly patients with MS. As patients with MS age, they may develop comorbidities such as hypertension and diabetes that may affect HRQOL. However, a previous study comparing older and younger patients with MS found similar QOL in the two groups even though the older patients had more physical limitations [7].
A strength of the current study was its use of the Andersen Healthcare Utilization model to structure the regression analysis, since the model accounts for multiple influences on health status. The striking finding that employment and being widowed were linked to better physical QOL suggests that older MS patients may adapt and adjust well to their illness. Researchers have shown that the widowed elderly often take on more responsibilities and tasks when they lose their partner, which leads to increased self-esteem and QOL [8]. Another advantage was that the investigators evaluated the different exposure variables and their associations with mental and physical QOL while identifying multiple confounding variables. Additionally, the use of 2 cognitive assessment tools provided a stronger assessment of patients' cognitive function.
The main weakness of the study was its cross-sectional design with convenience sampling. The convenience sample was based on voluntary participation, which may result in self-selection bias. In addition, the self-report design is subject to the usual limitations of self-reported data: participants may exaggerate symptoms to make their situation seem worse or may under-report the severity or frequency of symptoms to minimize their problems. While the overall sample size was 211, not all respondents completed all the surveys, and response rates varied by question. Thus, missing data may have affected the results, but which data are missing is not discernible from the paper. That patients were from a single geographic area and had relatively high education levels (44% with college or above) further limits the generalizability of the study. Another limitation is the use of the Beck Depression Inventory, which was not specifically designed for use in the elderly. In addition, the results might have been affected by unmeasured confounding variables, for example daily physical activity, which may modify the relationships among depression, cognition, and QOL.
Applications for Clinical Practice
This study reinforces the importance of monitoring older MS patients for factors that may influence their HRQOL. Depression, disability, and cognitive impairment should be assessed regularly. Clinicians should encourage and empower elderly patients to continue with activities, including employment, that promote their mental and physical well-being and help maintain their independence. Assessing patients with geriatric-specific tools may provide more reliable and accurate assessment data that better account for aging dynamics. In addition, comorbidities must be managed appropriately.
—Aliza Bitton Ben-Zacharia, DNP, ANP, and Allison Squires, PhD, RN, New York University College of Nursing
1. Opara JA, Jaracz K, Brola W. Quality of life in multiple sclerosis. J Med Life 2010;3:352–8.
2. Mitchell AJ, Benito-León J, González JM, Rivera-Navarro J. Quality of life and its assessment in multiple sclerosis: integrating physical and psychological components of wellbeing. Lancet Neurol 2005;4:556–66.
3. Benedict RH, Wahlig E, Bakshi R, et al. Predicting quality of life in multiple sclerosis: accounting for physical disability, fatigue, cognition, mood disorder, personality, and behavior change. J Neurol Sci 2005;231:29–34.
4. Göksel Karatepe A, Kaya T, Günaydn R, et al. Quality of life in patients with multiple sclerosis: the impact of depression, fatigue, and disability. Int J Rehabil Res 2011;34:290–8.
5. Nortvedt MW, Riise T, Myhr KM, Nyland HI. Quality of life in multiple sclerosis: measuring the disease effects more broadly. Neurology 1999;53:1098–103.
6. Visschedijk MA, Uitdehaag BM, Klein M, et al. Value of health-related quality of life to predict disability course in multiple sclerosis. Neurology 2004;63:2046–50.
7. Ploughman M, Austin MW, Murdoch M, et al. Factors influencing healthy aging with multiple sclerosis: a qualitative study. Disabil Rehabil 2012;34:26–33.
8. Minden SL, Frankel D, Hadden LS, et al. Disability in elderly people with multiple sclerosis: an analysis of baseline data from the Sonya Slifka Longitudinal Multiple Sclerosis Study. NeuroRehabilitation 2004;19:55–67.
Co-Infection with HIV Increases Risk for Decompensation in Patients with HCV
Study Overview
Objective. To compare the incidence of hepatic decompensation between patients co-infected with HIV and hepatitis C virus (HCV) who received antiretroviral therapy and patients monoinfected with HCV.
Design. Retrospective cohort study.
Participants and setting. This study used the Veterans Aging Cohort Study Virtual Cohort (VACS-VC), which includes electronic medical record data from HIV-infected patients receiving care at Veterans Affairs (VA) medical facilities in the United States. Inclusion criteria for co-infected patients were detectable HCV RNA; recent initiation of antiretroviral therapy (ART), defined as use of ≥ 3 antiretroviral drugs from 2 classes or ≥ 3 nucleoside analogues within the VA system; an HIV RNA level > 500 copies/mL within 180 days before starting ART; and follow-up in the VACS-VC for at least 12 months after initiating ART. Inclusion criteria for HCV-monoinfected patients were detectable HCV RNA, no HIV diagnosis or antiretroviral prescriptions, and follow-up in the VACS-VC for at least 12 months before inclusion in the study. Exclusion criteria were hepatic decompensation, hepatocellular carcinoma, or liver transplant during the 12-month baseline period, or receipt of interferon-based HCV therapy.
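Restating the selection rules as a simple filter can make them easier to follow. The sketch below is only an illustration of the criteria described above; the field names are hypothetical placeholders and do not reflect the VACS-VC data model.

```python
# Illustrative restatement of the cohort-selection rules described above.
# Field names are hypothetical placeholders, not the VACS-VC schema.

def eligible_coinfected(p):
    return (p["hcv_rna_detectable"]
            and p["recently_initiated_art"]           # >= 3 ARVs from 2 classes, or >= 3 nucleoside analogues
            and p["hiv_rna_copies_per_ml"] > 500      # measured within 180 days before ART start
            and p["months_followed_after_art"] >= 12)

def eligible_monoinfected(p):
    return (p["hcv_rna_detectable"]
            and not p["hiv_diagnosis"]
            and not p["antiretroviral_prescriptions"]
            and p["months_in_cohort"] >= 12)

def excluded(p):
    # Applied to both groups, based on the 12-month baseline period.
    return (p["baseline_hepatic_decompensation"]
            or p["baseline_hepatocellular_carcinoma"]
            or p["baseline_liver_transplant"]
            or p["received_interferon_based_hcv_therapy"])

example = {"hcv_rna_detectable": True, "recently_initiated_art": True,
           "hiv_rna_copies_per_ml": 12000, "months_followed_after_art": 14,
           "baseline_hepatic_decompensation": False,
           "baseline_hepatocellular_carcinoma": False,
           "baseline_liver_transplant": False,
           "received_interferon_based_hcv_therapy": False}
print(eligible_coinfected(example) and not excluded(example))  # True
```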
Main outcome measure. The primary outcome was incident hepatic decompensation, defined as diagnosis of ascites, spontaneous bacterial peritonitis, or esophageal variceal hemorrhage at hospital discharge or 2 such outpatient diagnoses.
Main results. A total of 10,359 patients met inclusion criteria and were enrolled between 1997 and 2010. Of these, 4280 were patients co-infected with HIV and HCV and treated with antiretroviral agents and 6079 were patients who were HCV-monoinfected. Age, race/ethnicity, and history of diabetes, alcohol dependence or abuse, and injection or non-injection drug use were similar between the 2 groups. The majority of participants were men. HCV genotype 1 was most prevalent in both groups. More patients in the co-infected group than in the monoinfected group had HCV RNA levels ≥ 400,000 IU/mL and/or ≥ 1 × 10⁶ copies/mL.
Hepatic decompensation occurred more frequently among those who were co-infected and receiving ART (271 [6.3%]) than among those who were monoinfected (305 [5.0%], P = 0.004). The incidence rate was 9.5 events per 1000 person-years (95% CI, 7.6–11.9) among patients co-infected with HIV and HCV and treated with ART and 5.7 events per 1000 person-years (95% CI, 4.4–7.4) among patients who were monoinfected. Variceal hemorrhage was less common among patients who were co-infected than among those who were monoinfected (71 [26.2%] vs. 168 [55.1%], P < 0.001). The proportions of patients with ascites (226 [83.4%] in the co-infected group vs. 236 [77.4%] in the monoinfected group, P = 0.070) and spontaneous bacterial peritonitis (48 [17.7%] vs. 68 [22.3%], P = 0.171) were similar. After adjustment for age, race/ethnicity, diabetes, BMI, history of alcohol abuse, injection or non-injection drug use, and VA center patient volume, patients who were co-infected and receiving ART had a higher rate of hepatic decompensation than monoinfected patients (hazard ratio, 1.83 [95% CI, 1.54–2.18]).
In a subgroup analysis, rates of decompensation remained higher even among co-infected patients who maintained HIV RNA levels < 1000 copies/mL (hazard ratio, 1.65 [95% CI, 1.20–2.27]).
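The reported event counts and incidence rates can be tied together with simple person-time arithmetic. The sketch below back-calculates the approximate person-years of follow-up implied by the published (rounded) figures and compares the crude rate ratio with the adjusted hazard ratio; it is an approximation, not data from the study.

```python
# Relating the reported event counts to incidence rates per 1000 person-years.
# rate = events / person_years * 1000  =>  person_years ~= events / rate * 1000

events_coinfected, rate_coinfected = 271, 9.5        # events, per 1000 person-years
events_monoinfected, rate_monoinfected = 305, 5.7

py_coinfected = events_coinfected / rate_coinfected * 1000        # ~28,500 person-years
py_monoinfected = events_monoinfected / rate_monoinfected * 1000  # ~53,500 person-years

# The crude rate ratio (~1.67) is close to, but not the same as, the adjusted
# hazard ratio of 1.83 from the multivariable model.
crude_rate_ratio = rate_coinfected / rate_monoinfected

print(round(py_coinfected), round(py_monoinfected), round(crude_rate_ratio, 2))
```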
Conclusion. Patients who were co-infected with HIV and HCV and treated with ART had higher rates of hepatic decompensation compared with patients monoinfected with HCV. Good control of HIV viral loads in co-infected patients may not be sufficient to improve health outcomes.
Commentary
Currently, it is estimated that there are 3.5 to 5.5 million people in the United States infected with HCV, accounting for about 1.5% of the population. Approximately 20% to 40% of those infected will develop chronic infection, and 10% to 25% of these patients will progress to severe liver disease [1]. Yet of the 3.5 million people who are thought to be chronically infected with HCV, only 50% are diagnosed and aware of the infection, and a mere 16% are treated for HCV [2].
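As a rough illustration of the care cascade these percentages imply (taking both percentages, as the text does, relative to the estimated 3.5 million chronically infected people):

```python
# Rough illustration of the HCV care cascade implied by the figures above.
chronically_infected = 3_500_000
diagnosed = 0.50 * chronically_infected   # ~1.75 million aware of their infection
treated = 0.16 * chronically_infected     # ~560,000 ever treated for HCV

print(f"diagnosed: {diagnosed:,.0f}, treated: {treated:,.0f}")
```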
Estimates suggest that about 10% of those with HCV are also infected with HIV. In the era prior to ART for HIV infections, patients with HIV and HCV most commonly died of HIV-related causes. In the post-ART era, patients are surviving longer and are now experiencing HCV-related comorbidities [3].
This study compares the incidence of hepatic decompensation in patients with HIV and HCV co-infection who were treated with ART and patients with HCV monoinfection. The results show that co-infected patients treated with ART had a higher incidence of hepatic decompensation than those who were monoinfected. The study's strengths are its large enrollment (> 10,000 patients) and long follow-up (6.8 and 9.9 years for the co-infected and monoinfected cohorts, respectively). As the authors indicate, a weakness of the study is the exclusion of hepatic encephalopathy and jaundice from the definition of hepatic decompensation; their reasoning is that these diagnoses frequently result from unrelated causes, such as narcotic overdose and biliary obstruction. It is possible that this resulted in an underestimation of hepatic decompensation. Finally, 98.8% of the enrolled patients were male, so the results cannot be generalized to women.
Since 2011, the availability of direct-acting antivirals for the treatment of HCV has rapidly increased. These new agents have improved treatment outcomes, with higher sustained virologic response rates, shorter treatment durations, and more favorable adverse event profiles [4]. Telaprevir and boceprevir were the first-generation protease inhibitors; they were followed by simeprevir in 2013. Sofosbuvir, the first polymerase inhibitor, also became available in 2013. These agents have been and continue to be evaluated in HIV/HCV co-infected patients, both treatment-naive and previously treated, with good outcomes. A fifth agent, faldaprevir, another protease inhibitor, is expected to become available this year, and others are in clinical trials [5]. For example, regimens using sofosbuvir in co-infected patients have achieved sustained virologic response rates of 67% to 88%, depending on genotype, similar to rates in monoinfected patients [6].
Applications for Clinical Practice
The authors found that maintaining HIV viral loads below 1000 copies/mL was associated with a reduced risk for hepatic decompensation. However, the difference in incidence rates between those with HIV loads < 1000 copies/mL and those with loads ≥ 1000 copies/mL was small (9.4 [95% CI, 5.4–16.2] vs. 9.6 [95% CI, 7.5–12.2] events per 1000 person-years). The findings suggest that control of HIV viral loads in co-infected patients is not sufficient to reduce the rate of liver complications. The authors propose that earlier consideration be given to treating HCV infection in co-infected patients to improve health outcomes. The American Association for the Study of Liver Diseases and the Infectious Diseases Society of America have published guidelines for the diagnosis and management of HCV [7]. The difference in hepatic decompensation rates between mono- and co-infected patients should become less relevant as use of direct-acting antivirals expands.
—Mayu O. Frank, MS, ANP-BC and Allison Squires, PhD, RN, New York University College of Nursing
1. Action plan for the prevention, care, and treatment of viral hepatitis (2014-2016). US Department of Health and Human Services; 2014. Available at http://aids.gov/news-and-events/hepatitis/.
2. Yehia BR, Schranz AJ, Umscheid CA, Lo Re V. The treatment cascade for chronic hepatitis C virus infection in the United States: a systematic review and meta-analysis. PLOS One 2014;9:1–7.
3. Highleyman L. HIV/HCV coinfection: a new era of treatment. BETA 2001; Fall/Winter: 30–47.
4. Shiffman ML. Hepatitis C virus therapy in the direct acting antiviral era. Curr Opin Gastroenterol 2014;30:217–22.
5. Bichoupan K, Dieterich DT, Martel-Laferriere V. HIV-Hepatitis C virus co-infection in the era of direct-acting antivirals. Curr HIV/AIDS Rep. 2014 July 5. [Epub ahead of print]
6. Sulkowski M, Rodriguez-Torres M, Lalezari J, et al. All-oral therapy with sofosbuvir plus ribavirin for the treatment of HCV genotype 1,2, and 3 infection in patients co-infected with HIV (PHOTON-1). 64th annual meeting of the American Association for the Study of Liver Diseases. Washington, DC; Nov 2013.
7. The American Association for the Study of Liver Diseases and the Infectious Diseases Society of America. Recommendations for testing, managing, and treating hepatitis C. Accessed 1 Aug 2014 at www.hcvguidelines.org.
Study Overview
Objective. To compare the incidence of hepatic decompensation in patients who are co-infected with HIV and hepatitis C (HCV) and who underwent antiretroviral treatment and patients who are HCV-monoinfected.
Design. Retrospective cohort study.
Participants and setting. This study used the Veterans Aging Cohort Study Virtual Cohort (VACS-VC), which includes electronic medical record data from patients who are HIV-infected and are receiving care at Veterans Affairs (VA) medical facilities in the United States. Inclusion criteria for patients who were co-infected were: detectable HCV RNA, recently initiated antiretroviral therapy (ART), defined as use of ≥ 3 antiretroviral drugs from 2 classes or ≥ 3 nucleoside analogues within the VA system, HIV RNA level > 500 copies/mL within 180 days before starting ART, and were seen in the VACS-VC for at least 12 months after initiating ART. Inclusion criteria for patients who were monoinfected with HCV were detectable HCV RNA, no HIV diagnosis or antiretroviral prescriptions, and seen in the VACS-VC for at least 12 months prior to inclusion into the study. Exclusion criteria were hepatic decompensation, hepatocellular carcinoma, and liver transplant during the 12-month baseline period or receipt of interferon-based HCV therapy.
Main outcome measure. The primary outcome was incident hepatic decompensation, defined as diagnosis of ascites, spontaneous bacterial peritonitis, or esophageal variceal hemorrhage at hospital discharge or 2 such outpatient diagnoses.
Main results. A total of 10,359 patients met inclusion criteria and were enrolled between 1997 and 2010. Of these, 4280 were patients co-infected with HIV and HCV and treated with antiretroviral agents and 6079 were patients who were HCV-monoinfected. Age, race/ethnicity, and history of diabetes, alcohol dependence or abuse, and injection or non-injection drug were similar between the 2 groups. The majority of participants were men. HCV genotype 1 was most prevalent in both groups. There were more patients who had HCV RNA levels ≥ 400,000 IU/mL and/or ≥ 1x106 copies/mL in the co-infected group versus the monoinfected group.
Hepatic decompensation occurred more frequently among those who were co-infected and receiving ART (271 [6.3%]) than among those who were monoinfected (305 [5.0%], P = 0.004). The incidence rate was 9.5 events per 1000 person-years (95% CI, 7.6–11.9) among patients co-infected with HIV and HCV and treated with ART and 5.7 events per 1000 person-years (95% CI, 4.4–7.4) among patients who were monoinfected. Variceal hemorrhage was less common among patients who were co-infected as compared to those who were monoinfected (71 [26.2%] vs. 168 [55.1%], P < 0.001). The proportion of patients with ascites (226 [83.4%] in the co-infected group vs. 236 [77.4%] in the monoinfected, P = 0.070) and spontaneous bacterial peritonitis (48 [17.7%] in the co-infected group vs. 68 [22.3%] in the monoinfected, P = 0.171) were similar. After adjustment for age, race/ethnicity, diabetes, BMI, history of alcohol abuse, injection or non-injection drug use, and VA center patient volume, patients who were co-infected and receiving ART had a higher rate of hepatic decompensation than monoinfected patients (hazard ratio, 1.83 [95% CI, 1.54–2.18]).
In subgroup analysis, rates of decompensation remained higher even among co-infected patients who maintained HIV RNA levels < 1000 copies/mL (hazard ratio 1.65 [95% CI 1.20–2.27])
Conclusion. Patients who were co-infected with HIV and HCV and treated with ART had higher rates of hepatic decompensation compared with patients monoinfected with HCV. Good control of HIV viral loads in co-infected patients may not be sufficient to improve health outcomes.
Commentary
Study Overview
Objective. To compare the incidence of hepatic decompensation in patients who are co-infected with HIV and hepatitis C (HCV) and who underwent antiretroviral treatment and patients who are HCV-monoinfected.
Design. Retrospective cohort study.
Participants and setting. This study used the Veterans Aging Cohort Study Virtual Cohort (VACS-VC), which includes electronic medical record data from HIV-infected patients receiving care at Veterans Affairs (VA) medical facilities in the United States. Inclusion criteria for co-infected patients were: detectable HCV RNA; recently initiated antiretroviral therapy (ART), defined as use of ≥ 3 antiretroviral drugs from 2 classes or ≥ 3 nucleoside analogues within the VA system; an HIV RNA level > 500 copies/mL within 180 days before starting ART; and observation in the VACS-VC for at least 12 months after initiating ART. Inclusion criteria for HCV-monoinfected patients were detectable HCV RNA, no HIV diagnosis or antiretroviral prescriptions, and observation in the VACS-VC for at least 12 months prior to inclusion in the study. Exclusion criteria were hepatic decompensation, hepatocellular carcinoma, or liver transplant during the 12-month baseline period, or receipt of interferon-based HCV therapy.
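Translated into code, these criteria amount to a set of boolean filters applied to the cohort data. The sketch below is purely illustrative: the file and column names are hypothetical and do not reflect the actual VACS-VC schema or the authors' methods.

# Hypothetical sketch of the co-infected cohort selection logic; the extract
# file and column names are invented, not the VACS-VC schema.
import pandas as pd

patients = pd.read_csv("vacs_vc_extract.csv")  # hypothetical extract

coinfected = patients[
    (patients["hcv_rna_detectable"] == 1)
    & (patients["recently_initiated_art"] == 1)        # >= 3 ARVs from 2 classes
    & (patients["hiv_rna_pre_art"] > 500)              # within 180 days before ART
    & (patients["months_in_cohort_after_art"] >= 12)
    & (patients["baseline_decompensation"] == 0)       # exclusions during baseline
    & (patients["baseline_hcc"] == 0)
    & (patients["baseline_liver_transplant"] == 0)
    & (patients["interferon_hcv_therapy"] == 0)
]
print(len(coinfected), "patients meet the co-infected inclusion criteria")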
Main outcome measure. The primary outcome was incident hepatic decompensation, defined as diagnosis of ascites, spontaneous bacterial peritonitis, or esophageal variceal hemorrhage at hospital discharge or 2 such outpatient diagnoses.
Main results. A total of 10,359 patients met inclusion criteria and were enrolled between 1997 and 2010: 4280 patients co-infected with HIV and HCV and treated with antiretroviral agents and 6079 patients who were HCV-monoinfected. Age, race/ethnicity, and history of diabetes, alcohol dependence or abuse, and injection or non-injection drug use were similar between the 2 groups. The majority of participants were men. HCV genotype 1 was the most prevalent genotype in both groups. More patients in the co-infected group than in the monoinfected group had HCV RNA levels ≥ 400,000 IU/mL and/or ≥ 1 × 10⁶ copies/mL.
Hepatic decompensation occurred more frequently among those who were co-infected and receiving ART (271 [6.3%]) than among those who were monoinfected (305 [5.0%], P = 0.004). The incidence rate was 9.5 events per 1000 person-years (95% CI, 7.6–11.9) among patients co-infected with HIV and HCV and treated with ART and 5.7 events per 1000 person-years (95% CI, 4.4–7.4) among patients who were monoinfected. Variceal hemorrhage was less common among patients who were co-infected as compared to those who were monoinfected (71 [26.2%] vs. 168 [55.1%], P < 0.001). The proportion of patients with ascites (226 [83.4%] in the co-infected group vs. 236 [77.4%] in the monoinfected, P = 0.070) and spontaneous bacterial peritonitis (48 [17.7%] in the co-infected group vs. 68 [22.3%] in the monoinfected, P = 0.171) were similar. After adjustment for age, race/ethnicity, diabetes, BMI, history of alcohol abuse, injection or non-injection drug use, and VA center patient volume, patients who were co-infected and receiving ART had a higher rate of hepatic decompensation than monoinfected patients (hazard ratio, 1.83 [95% CI, 1.54–2.18]).
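Because incidence is reported as events per 1000 person-years, the implied person-time in each group can be back-calculated from the event counts as a rough consistency check. The short Python sketch below is illustrative arithmetic only, not data taken from the study.

# Illustrative back-calculation: person-years implied by an event count and an
# incidence rate expressed per 1000 person-years (not study data).
def implied_person_years(events, rate_per_1000_py):
    return events / rate_per_1000_py * 1000.0

coinfected_py = implied_person_years(271, 9.5)    # roughly 28,500 person-years
monoinfected_py = implied_person_years(305, 5.7)  # roughly 53,500 person-years
print(f"co-infected ~{coinfected_py:,.0f} PY, monoinfected ~{monoinfected_py:,.0f} PY")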
In subgroup analysis, rates of decompensation remained higher even among co-infected patients who maintained HIV RNA levels < 1000 copies/mL (hazard ratio, 1.65 [95% CI, 1.20–2.27]).
Conclusion. Patients who were co-infected with HIV and HCV and treated with ART had higher rates of hepatic decompensation compared with patients monoinfected with HCV. Good control of HIV viral loads in co-infected patients may not be sufficient to improve health outcomes.
Commentary
Currently, an estimated 3.5 to 5.5 million people in the United States are infected with HCV, about 1.5% of the population. Approximately 20% to 40% of those infected will develop chronic infection, and 10% to 25% of these patients will progress to severe liver disease [1]. Yet of the 3.5 million people thought to be chronically infected with HCV, only 50% have been diagnosed and are aware of the infection, and a mere 16% have been treated [2].
Estimates suggest that about 10% of those with HCV are also infected with HIV. In the era prior to ART for HIV infections, patients with HIV and HCV most commonly died of HIV-related causes. In the post-ART era, patients are surviving longer and are now experiencing HCV-related comorbidities [3].
This study compares the incidence of hepatic decompensation between patients with HIV/HCV co-infection undergoing treatment with ART and patients with HCV monoinfection. The results show that co-infected patients treated with ART had a higher incidence of hepatic decompensation than monoinfected patients. The study's strengths are its large enrollment (> 10,000 patients) and long follow-up (6.8 and 9.9 years for the co-infected and monoinfected cohorts, respectively). As the authors acknowledge, a key weakness is the exclusion of hepatic encephalopathy and jaundice from the definition of hepatic decompensation, on the grounds that these frequently occur from unrelated causes such as narcotic overdose and biliary obstruction; this may have resulted in an underestimation of hepatic decompensation. Finally, 98.8% of the enrolled patients were male, so the results cannot be generalized to women.
Since 2011, the availability of direct-acting antivirals for the treatment of HCV has expanded rapidly. These agents have improved treatment outcomes, with higher sustained virologic response rates, shorter treatment durations, and fewer adverse events [4]. Telaprevir and boceprevir were the first-generation protease inhibitors; simeprevir followed in 2013, and sofosbuvir, the first polymerase inhibitor, also became available in 2013. These agents have been and continue to be evaluated in HIV/HCV co-infected patients, both treatment-naive and previously treated, with good outcomes. A fifth agent, faldaprevir, another protease inhibitor, is expected to become available this year, and others are in clinical trials [5]. For example, regimens using sofosbuvir in co-infected patients have achieved sustained virologic response rates of 67% to 88%, depending on genotype, similar to rates in monoinfected patients [6].
Applications for Clinical Practice
The authors found that maintaining HIV viral loads below 1000 copies/mL reduced the risk of hepatic decompensation. However, the difference in incidence rates between patients with HIV loads < 1000 copies/mL and those with loads ≥ 1000 copies/mL was small (9.4 events per 1000 person-years [95% CI, 5.4–16.2] vs. 9.6 [95% CI, 7.5–12.2]). These findings suggest that control of HIV viral loads in co-infected patients is not sufficient to reduce the rate of liver complications. The authors propose that earlier consideration be given to treating HCV infection in co-infected patients to improve health outcomes. The American Association for the Study of Liver Diseases and the Infectious Diseases Society of America have published guidelines for the diagnosis and management of HCV [7]. The difference in hepatic decompensation rates between mono- and co-infected patients should become less relevant as use of direct-acting antivirals expands.
—Mayu O. Frank, MS, ANP-BC and Allison Squires, PhD, RN, New York University College of Nursing
1. Action plan for the prevention, care, and treatment of viral hepatitis (2014-2016). US Department of Health and Human Services; 2014. Available at http://aids.gov/news-and-events/hepatitis/.
2. Yehia BR, Schranz AJ, Umscheid CA, Lo Re V. The treatment cascade for chronic hepatitis C virus infection in the United States: a systematic review and meta-analysis. PLOS One 2014;9:1–7.
3. Highleyman L. HIV/HCV coinfection: a new era of treatment. BETA 2001; Fall/Winter: 30–47.
4. Shiffman ML. Hepatitis C virus therapy in the direct acting antiviral era. Curr Opin Gastroenterol 2014;30:217–22.
5. Bichoupan K, Dieterich DT, Martel-Laferriere V. HIV-Hepatitis C virus co-infection in the era of direct-acting antivirals. Curr HIV/AIDS Rep. 2014 July 5. [Epub ahead of print]
6. Sulkowski M, Rodriguez-Torres M, Lalezari J, et al. All-oral therapy with sofosbuvir plus ribavirin for the treatment of HCV genotype 1,2, and 3 infection in patients co-infected with HIV (PHOTON-1). 64th annual meeting of the American Association for the Study of Liver Diseases. Washington, DC; Nov 2013.
7. The American Association for the Study of Liver Diseases and the Infectious Diseases Society of America. Recommendations for testing, managing, and treating hepatitis C. Accessed 1 Aug 2014 at www.hcvguidelines.org.
Frailty as a Predictive Factor in Geriatric Trauma Patient Outcomes
Study Overview
Objective. To evaluate the usefulness of the Frailty Index (FI) as a prognostic indicator of adverse outcomes in geriatric trauma patients.
Design. Prospective cohort study.
Setting and participants. Geriatric (aged 65 and over) trauma patients admitted to inpatient units at a Level 1 trauma center in Arizona were enrolled. Patients were excluded if they were intubated/nonresponsive with no family members present or were transferred from another institution (eg, a skilled nursing facility). The following categories of data were collected: (a) patient demographics, (b) type and mechanism of injury, (c) vital signs and clinical scores (eg, Glasgow Coma Scale score, systolic blood pressure, heart rate, body temperature), (d) need for operative intervention, (e) in-hospital complications, (f) hospital and intensive care unit (ICU) lengths of stay, and (g) discharge disposition.
Patients or, in the case of nonresponsive patients, their closest relative, responded to the 50-item Frailty Index questionnaire, which includes questions regarding age, comorbid conditions, medications, activities of daily living (ADLs), social activities, mood, and nutrition. FI score ranges from 0 (non-frail) to 1 (frail), with an FI of 0.25 or more indicative of frailty based on established guidelines. Patients were categorized as frail or non-frail according to their FI scores and were followed during the course of their hospitalization.
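For readers unfamiliar with deficit-accumulation frailty indices, the sketch below illustrates the scoring arithmetic under the assumption that the FI is the proportion of assessed items scored as deficits; the item names are hypothetical, and the study's 50-item instrument is not reproduced here.

# Minimal deficit-accumulation sketch (hypothetical items, not the study's instrument).
# Each item is scored from 0 (deficit absent) to 1 (deficit fully present).
FRAILTY_CUTOFF = 0.25

def frailty_index(item_scores):
    if not item_scores:
        raise ValueError("at least one item must be assessed")
    return sum(item_scores.values()) / len(item_scores)

patient_items = {
    "needs_help_with_adls": 1.0,
    "takes_five_or_more_medications": 1.0,
    "low_mood": 0.5,
    "poor_nutrition": 0.0,
    "diabetes": 1.0,
}  # in practice, up to 50 items would be scored
fi = frailty_index(patient_items)
print(f"FI = {fi:.2f} -> {'frail' if fi >= FRAILTY_CUTOFF else 'non-frail'}")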
Main outcome measure. The primary outcome measure was in-hospital complications. In-hospital complications included myocardial infarction, cardiopulmonary arrest, pneumonia, pulmonary embolism, sepsis, urinary tract infection, deep venous thrombosis, disseminated intravascular coagulation, renal insufficiency, and reoperation. The secondary outcome measure was adverse discharge disposition, which was defined as death during the course of hospitalization or discharge to a skilled nursing facility.
Main results. The sample consisted of 250 patients with a mean age of 77.9 years. Among these, 44.0% were considered frail. Patients with frailty were more likely to have a higher Injury Severity Score (P = 0.04) and a higher mean FI (P = 0.01) than those without frailty. There were no statistically significant differences with respect to age (P = 0.21), mechanism of injury (P = 0.09), systolic blood pressure (P = 0.30), or Glasgow Coma Scale score (P = 0.91) between the groups.
Patients with frailty were more likely to develop in-hospital complications (37.3% vs 21.4%, P = 0.001) than those without frailty. Among these complications, pneumonia and urinary tract infection were the most common. There were no differences in the rate of reoperation (P = 0.54) between the 2 groups. An FI of 0.25 or higher was associated with the development of in-hospital complications (P = 0.001) even after adjusting for age, systolic blood pressure, heart rate, and Injury Severity Score.
Frail patients had longer hospital length of stay (P = 0.01) and ICU length of stay (P = 0.01), and were more likely to have adverse discharge disposition (37.3% vs. 12.9%, P = 0.001). All patients who died during the course of hospitalization (n = 5) were considered frail. Frailty was also found to be a predictor of adverse discharge disposition (P = 0.001) after adjustment for age, male sex, Injury Severity Score, and mechanism of injury.
Conclusion. The FI is effective in identifying geriatric trauma patients who are vulnerable to poor health outcomes.
Commentary
The diagnosis and treatment of elderly patients is complicated by the presence of multiple geriatric syndromes, including frailty [1]. Frailty is defined as increased vulnerability to negative health outcomes, marked by physical and functional decline, that eventually leads to disability, dependency, and mortality [2]. Factors such as age, malnutrition, and disease give rise to dysregulation of bodily systems that eventually leads to reductions in mobility, strength, and cognition in frail older adults [3]. In turn, frail patients, who lack the physiological reserve to withstand illness and adapt to stressors, experience high rates of hospitalization and mortality and reduced quality of life. Unsurprisingly, mortality rates among geriatric trauma patients are higher than those among younger adult trauma patients [4]. It is, therefore, essential to identify patients with frailty at the outset of hospitalization in order to improve health outcomes and reduce mortality rates in this population. Yet there is a dearth of assessment tools to predict outcomes in frail trauma patients [5].
This study has several strengths. Outcome measures are plainly stated. The inclusion criteria were broad enough to include most geriatric trauma patients, but the authors eliminated a number of confounders by excluding patients admitted from institutional settings, who may have been more susceptible to negative health outcomes at baseline than noninstitutionalized adults. Recruitment strategies were acceptable and reflect ethical standards. Groups were defined based on an accepted and previously validated FI cutoff. Lack of blinding did not threaten the study's design given that most outcomes were beyond the control of study participants. Multivariate regression adjusted for a number of potential confounders, including age, length of hospitalization, and injury severity. The Injury Severity Score, the Abbreviated Injury Scale score, and the Glasgow Coma Scale score are validated instruments that are widely used and enable standardized assessments of cognition and degree of injury.
The study methodology also has a number of weaknesses. The authors followed patients from admission to discharge; however, they did not re-evaluate patients following their release from the inpatient setting. It is, therefore, not clear whether the FI is predictive of quality of life, functional status, or hospital readmissions after discharge into the community. The cohort was largely male (69.2%) and predominantly Caucasian, and participants were recruited from only one medical center, all of which limit the study's generalizability. In addition, the authors do not clarify how they defined the criteria for in-hospital complications or adverse discharge disposition. For example, the study does not consider skin breakdown, a common concern among hospitalized older patients, as an in-hospital complication. The authors also did not adjust for the number of diagnoses at baseline or the presence of chronic comorbid conditions, which are also associated with negative health outcomes.
Applications for Clinical Practice
Although lengthy, with over 50 variables in 5 categories, the FI has the potential to help health care providers improve risk stratification, assess patient acuity, and formulate treatment plans to improve the health of frail elderly patients. The FI will enable hospitals to direct appropriate resources, including staff, to the most vulnerable subsets of patients in order to improve outcomes and reduce costs. Moreover, awareness of frailty enables greater discussion between patients and families of trauma patients about the risks and benefits of complex intervention, increases referrals to palliative care, and improves quality of life in this population [6].
—Tina Sadarangani, MSN, APRN, and Allison Squires, PhD, RN, New York University College of Nursing
1. Rich MW. Heart failure in the oldest patients: the impact of comorbid conditions. Am J Geriatr Cardiol 2005;14:134–41.
2. Fried LP, Ferrucci L, Darer J, et al. Untangling the concepts of disability, frailty, and comorbidity: implications for improved targeting and care. J Gerontol A Biol Sci Med Sci 2004;59:255–63.
3. Lang PO, Michel JP, Zekry D. Frailty syndrome: a transitional state in a dynamic process. Gerontology 2009;55:539–49.
4. Hashmi A, Ibrahim-Zada I, Rhee P, et al. Predictors of mortality in geriatric trauma patients: a systematic review and meta-analysis. J Trauma Acute Care Surg 2014;76:894–901.
5. American College of Surgeons Trauma Quality Improvement Program. ACS TQIP geriatric trauma management guidelines. Available at https://mtqip.org/docs/.
6. Koller K, Rockwood K. Frailty in older adults: implications for end-of-life care. Cleve Clin J Med 2013;80:168–74.
English Ability and Glycemic Control in Latinos with Diabetes
Study Overview
Objective. To determine if there is an association between self-reported English language ability and glycemic control in Latinos with type 2 diabetes.
Design. Descriptive correlational study using data from a larger cross-sectional study.
Setting and participants. 167 adults with diabetes who self-identified as Latino or Hispanic recruited at clinics in the Chicago area from May 2004 to May 2006. The dataset was collected using face-to-face interviews with diabetic patients aged ≥ 18 years. All participants attended clinics affiliated with an academic medical center or physician offices affiliated with a suburban hospital. Patients with type 1 diabetes and those with < 17 points on the Mini-Mental State Examination were excluded. English speaking ability was categorized as speaking English “not at all,” “not well,” “well,” or “very well” based on patient self-report. A multivariable logistic regression model was used to examine the predictive relationship between English language skills and HbA1c levels, with covariates selected if they were significantly correlated with English language ability. The final regression model accounted for age, sex, education, annual income, health insurance status, duration of diabetes, birth in the United States, and years in the United States.
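As a hedged sketch of the kind of multivariable logistic regression described above (this is not the authors' code; the data file and column names are hypothetical), such a model could be fit in Python roughly as follows:

# Hypothetical sketch of the multivariable logistic regression described above;
# the CSV file and column names are illustrative, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("latino_diabetes_cohort.csv")         # hypothetical file
df["poor_control"] = (df["hba1c"] >= 7.0).astype(int)  # outcome: HbA1c >= 7.0%

model = smf.logit(
    "poor_control ~ C(english_ability) + age + C(sex) + C(education)"
    " + C(income_over_25k) + C(insured) + diabetes_duration_yrs"
    " + C(us_born) + years_in_us",
    data=df,
).fit()

# Exponentiated coefficients are the adjusted odds ratios reported in the results.
print(np.exp(model.params).round(2))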
Main outcome measure. HbA1c ≥ 7.0% as captured by chart review.
Main results. Of the 167 patients, 38% reported speaking English very well, 21% well, 26% not well, and 14% not at all. Reflecting immigration-sensitive patterns, patients who spoke English very well were younger and more likely to have graduated from high school and to have an annual income over $25,000. Comorbidities and complications did not differ by English speaking ability except for diabetic eye disease, which was more prevalent among those who did not speak English at all (42%, P = 0.04). Whether speaking ability was treated as a continuous or dichotomous variable, HbA1c levels formed a U-shaped curve: those who spoke English very well (odds ratio [OR] 2.32 [95% CI, 1.00–5.41]) or not at all (OR 4.11 [95% CI, 1.35–12.54]) had higher odds of an elevated HbA1c than those who spoke English well, although the difference was statistically significant only for those who spoke no English. In adjusted analyses, the U-shaped pattern persisted, with the highest odds among those who spoke English very well (OR 3.20 [95% CI, 1.05–9.79]) or not at all (OR 4.95 [95% CI, 1.29–18.92]).
Conclusion. The relationship between English speaking ability and diabetes management is more complex than previously described. Interventions aimed at improving diabetes outcomes may need to be tailored to specific subgroups within the Latino population.
Commentary
Immigrant health is complex, and language is an understudied factor in the health transitions of those who migrate for new lives or temporary work. For Latinos, migration was once thought to confer a health advantage, but a recent systematic review by Teruya et al [1] suggests that the migration experience has a wide variety of effects on health, many of which can be negative.
The notion that English fluency confers health care benefits is questionable, as the authors state. Those unfamiliar with the acculturation literature might assume that English speaking ability is a good marker of acculturation, but recent research suggests otherwise. Acculturation is a complex phenomenon that cannot be measured or gauged by a single variable [2–5]. Among the many influences, the migration experience and the country of origin play a major role in how acculturation unfolds in the arrival country. Health care providers seeking to better understand acculturation, and thereby improve care for their immigrant patients, would benefit from the extensive social science literature on the subject. The results of this study suggest that providers should not treat a patient's English speaking ability as a proxy for acculturation or assume that their health outcomes will be equivalent to those of native-born populations.
This study has a number of weaknesses. The main concern is that it did not consider several important health service delivery factors. The researchers did not assess the number of visits at which appropriate interpretation services were provided, whether visits were language concordant (limited English proficiency patients are more likely to form consistent service relationships with language-concordant providers [6–10]), or whether patients received diabetes education classes or individual counseling sessions to support self-management. These service-based factors could explain some of the results observed. The small sample size, the age of the data, and the failure to distinguish the country of origin of the Latino patients are additional weaknesses.
Applications for Clinical Practice
Providers can improve their clinical practice with limited English proficiency Latino patients with diabetes by being more sensitive to the potential effects of language on diabetes outcomes in this population. The results suggest that providers should not assume that a Latino patient's English language skills mean that the patient is better at self-managing diabetes and will have better outcomes. Asking patients about their country of origin and migration experiences may help providers disentangle the effects of language from other potentially confounding variables that influence diabetes-related outcomes.
—Allison Squires, PhD, RN
1. Teruya SA, Bazargan-Hejazi S. The immigrant and Hispanic paradoxes: a systematic review of their predictions and effects. Hisp J Behav Sci 2013 Sep 5;35:486–509.
2. Rudmin FW. Phenomenology of acculturation: retrospective reports from the Philippines, Japan, Quebec, and Norway. Cult Psychol 2010;16:313–32.
3. Matsunaga M, Hecht ML, Elek E, Ndiaye K. Ethnic identity development and acculturation: a longitudinal analysis of Mexican-heritage youth in the Southwest United States. J Cross Cult Psychol 2010;41:410–27.
4. Siatkowski A. Hispanic acculturation: a concept analysis. J Transcult Nurs 2007;18:316–23.
5. Horevitz E, Organista KC. The Mexican health paradox: expanding the explanatory power of the acculturation construct. Hisp J Behav Sci 2012;35:3–34.
6. Gany F, Leng J, Shapiro E, et al. Patient satisfaction with different interpreting methods: a randomized controlled trial. J Gen Intern Med 2007;22 Suppl 2:312–8.
7. Grover A, Deakyne S, Bajaj L, Roosevelt GE. Comparison of throughput times for limited English proficiency patient visits in the emergency department between different interpreter modalities. J Immigr Minor Health 2012;14:602–7.
8. Ngo-Metzger Q, Sorkin DH, Phillips RS, et al. Providing high-quality care for limited English proficient patients: the importance of language concordance and interpreter use. J Gen Intern Med 2007;22 Suppl 2:324–30.
9. Karliner LS, Jacobs EA, Chen AH, Mutha S. Do professional interpreters improve clinical care for patients with limited English proficiency? A systematic review of the literature. Health Serv Res 2007;42:727–54.
10. Arauz Boudreau AD, Fluet CF, Reuland CP, et al. Associations of providers’ language and cultural skills with Latino parents’ perceptions of well-child care. Acad Pediatr 2010;10:172–8.
Long-Term Outcomes of Bariatric Surgery in Obese Adults
Study Overview
Objective. To identify the long-term outcomes of bariatric surgery in adults with severe obesity.
Design. Prospective longitudinal observational cohort study (the Longitudinal Assessment of Bariatric Surgery Consortium [LABS]). LABS was established to collect long-term data on safety and efficacy of bariatric surgeries.
Participants and setting. 2458 patients who underwent Roux-en-Y gastric bypass (RYGB) or laparoscopic adjustable gastric banding (LAGB) at 10 hospitals in 6 clinical centers in the United States. Participants were included if they had a body mass index (BMI) greater than 35 kg/m2, were over the age of 18 years, and had not undergone prior bariatric surgeries. Participants were recruited between 2006 and 2009, and follow-up continued until September 2012. Data collection occurred at baseline prior to surgery and then at 6 months, 12 months, and annually until 3 years following surgery.
Main outcome measures. 3-year change in weight and resolution of diabetes, hypertension, and dyslipidemia.
Main results. Participants were between the ages of 18 and 78 years. The majority of participants were female (79%) and white (86%). Median BMI was 45.9 (interquartile range [IQR], 41.7–51.5). At baseline, 774 (33%) had diabetes, 1252 (63%) had dyslipidemia, and 1601 (68%) had hypertension. Three years after surgery, the RYGB group exhibited greater weight loss than the LAGB group (median 41 kg vs. 20 kg). Participants experienced most of their total weight loss during the first year following surgery. As for the health parameters assessed, at 3 years 67.5% of RYGB patients and 28.6% of LAGB patients had at least partial diabetes remission, 61.9% of RYGB patients and 27.1% of LAGB patients had dyslipidemia remission, and 38.2% of RYGB patients and 17.4% of LAGB patients had hypertension remission.
Conclusion. Three years following bariatric surgery, participants with severe obesity exhibited significant weight loss. There was variability in the amount of weight loss and in the resolution of diabetes, hypertension, and dyslipidemia observed.
Commentary
Obesity in the United States increased threefold between 1950 and 2000 [1]. Currently, more than one-third of adult Americans are obese [2]. The relationship between obesity and risk for morbidity from type 2 diabetes, hypertension, stroke, sleep apnea, osteoarthritis, and several cancers is well documented [3]. Finkelstein et al [4] estimated that health care costs related to obesity and consequent morbidity were approximately $148 billion in 2008. The use of bariatric surgery to address obesity has grown in recent years. However, there is a dearth of knowledge regarding the long-term outcomes of these procedures.
In this study of RYGB and LAGB patients, 5 weight change patterns were identified in each group, for a total of 10 trajectories. Although most weight loss was observed during the first year following surgery, 76% of RYGB patients had continued weight loss for 2 years, with a small weight increase the subsequent year. Only 4% of LAGB patients experienced consistent weight loss after 3 years. Overall, participants who underwent LAGB had greater variability in outcomes than RYGB patients. RYGB patients experienced greater remission of all chronic conditions examined and fewer new diagnoses of hypertension and dyslipidemia. There were 3 deaths within 30 days of surgery in the RYGB group and none in the LAGB group.
This study has several strengths, including its longitudinal design and the generalizability of its findings. Several factors contribute to the generalizability, including the large sample size (n = 2458), which included participants from 10 hospitals in 6 clinical centers and was more diverse than prior longitudinal studies of patients following bariatric surgery. In addition, the study had clear inclusion criteria, and attrition rates were low; data were collected for 79% and 85% of the RYGB and LAGB patients, respectively. Additionally, study personnel were trained on data collection, which occurred at several time points.
There are also a few limitations, including that the researchers used several methods for collecting data on the associated physical and physiologic indicators. Most weights were collected using a standardized scale; however, weights recorded on other scales and self-reported weights were accepted if an in-person weight was not obtained. Similarly, different measures were used to identify chronic conditions. Diabetes was identified by 3 different measures: taking a diabetes medication, glycated hemoglobin of 6.5% or greater, and fasting plasma glucose of 126 mg/dL or greater. Hypertension was defined as taking an antihypertensive medication, elevated systolic blood pressure (≥ 140 mm Hg), or elevated diastolic blood pressure (≥ 90 mm Hg). Likewise, high low-density lipoprotein (≥ 160 mg/dL) and taking a lipid-lowering medication were both used as indicators of hyperlipidemia. Therefore, chronic conditions were not identified or measured in a uniform manner. Accordingly, the authors observed high variability in remission rates among participants in the LAGB group, which may be directly attributable to these inconsistencies in identifying disease status. Although the sample is described as diverse compared with similar studies, it primarily consisted of white females.
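To make the measurement heterogeneity concrete, the sketch below (a hypothetical Python illustration; the field names and record structure are invented rather than taken from the LABS dataset) shows how such mixed medication-or-threshold criteria are typically operationalized. Because any single positive flag classifies a participant as having the condition, remission status can shift depending on which data elements happen to be available at a given visit.

# Illustrative only: hypothetical field names; thresholds follow the criteria described above.
def classify_conditions(record):
    """Flag chronic conditions from heterogeneous indicators (medication use or lab/vital thresholds)."""
    diabetes = (
        record.get("on_diabetes_med", False)
        or record.get("hba1c_pct", 0) >= 6.5
        or record.get("fasting_glucose_mg_dl", 0) >= 126
    )
    hypertension = (
        record.get("on_antihypertensive", False)
        or record.get("systolic_mm_hg", 0) >= 140
        or record.get("diastolic_mm_hg", 0) >= 90
    )
    dyslipidemia = (
        record.get("on_lipid_lowering_med", False)
        or record.get("ldl_mg_dl", 0) >= 160
    )
    return {"diabetes": diabetes, "hypertension": hypertension, "dyslipidemia": dyslipidemia}

# A participant whose only available data are self-reported or partial may be
# classified differently than one with a complete in-person assessment.
print(classify_conditions({"hba1c_pct": 6.7, "systolic_mm_hg": 132, "ldl_mg_dl": 150}))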
A significant finding was that non-white and younger participants had more missing data, as they were less likely to return for follow-up visits. Additionally, large discrepancies in weight loss were noted. The authors assert that both of these findings suggest that more education and support are needed to achieve lasting adherence in some subgroups of patients undergoing bariatric surgery. Further evaluation of which factors contribute to these differences in weight loss is also needed.
Applications for Clinical Practice
This study is relevant to practitioners caring for patients with multiple chronic conditions related to severe obesity. The results indicate that bariatric surgery is associated with significant improvements in weight and remission of several chronic conditions. Practitioners can inform patients about the safety and efficacy of bariatric surgery procedures and discuss the evidence supporting its long-term efficacy as an intervention. As obesity rates continue to increase, it is important to understand the long-term benefits and risks of bariatric surgery.
—Billy A. Caceres, MSN, RN, and Allison Squires, PhD, RN
1. Picot J, Jones J, Colquitt JL, et al. The clinical effectiveness and cost-effectiveness of bariatric (weight loss) surgery for obesity: a systematic review and economic evaluation. Health Technol Assess 2009;13:1–190, 215–357.
2. Ogden CL, Carroll MD, Kit BK, et al. Prevalence of childhood and adult obesity in the United States, 2011-2012. JAMA 2014;311:806–14.
3. National Institutes of Health. Clinical guidelines on the identification, evaluation, and treatment of overweight and obesity in adults 1998. Available at www.nhlbi.nih.gov/guidelines/obesity/ob_gdlns.pdf.
4. Finkelstein EA, Trogdon JG, Cohen JW, et al. Annual medical spending attributable to obesity: Payer-and service-specific estimates. Health Aff 2009;28:822–31.
Capturing the Impact of Language Barriers on Asthma Management During an Emergency Department Visit
Study Overview
Objective. To compare rates of asthma action plan use between limited English proficiency (LEP) caregivers and English proficient (EP) caregivers.
Design. Cross-sectional survey.
Participants and setting. A convenience sample of 107 Latino caregivers of children with asthma at an urban academic emergency department (ED). Surveys in the preferred language of the patient (English or Spanish, with the translated version previously validated) were distributed at the time of the ED visit. Interpreters were utilized when requested.
Main outcome measure. Caregiver use of an asthma action plan.
Main results. 51 LEP caregivers and 56 EP caregivers completed the survey. Mothers completed the surveys 87% of the time, and the average age of the patients was 4 years. Among the EP caregivers, 64% reported using an asthma action plan, while only 39% of the LEP caregivers reported using one. The difference was statistically significant (P = 0.01). In both correlation and regression analyses, English proficiency was the only variable examined (others included health insurance status and level of caregiver education) that showed a significant effect on asthma action plan use.
Conclusions. Children whose caregiver had LEP were significantly less likely to have and use an asthma action plan. Asthma education in the language of choice of the patient may help improve asthma care.
Commentary
With 20% of US households now speaking a language other than English at home [1], language barriers between providers and patients present multiple challenges to health services delivery and can significantly contribute to immigrant health disparities. Despite US laws and multiple federal agency policies requiring the use of interpreters during health care encounters, organizations continue to fall short of providing interpreter services and often lack adequate or equivalent materials for patient education. Too often, providers overestimate their language skills [2,3], use colleagues as ad hoc interpreters out of convenience [4], or rely on family members for interpretation [4]—a practice that is universally discouraged.
Recent research does suggest that the timing of interpreter use is critical. In planned encounters such as primary care visits, interpreters can and should be scheduled when a language-concordant provider is not available. During hospitalizations, including ED visits, interpreters are most effective when used on admission, during patient teaching, and upon discharge, and the timing of interpreter use has been shown to affect length of stay and readmission rates [5,6].
This study underscores the consequences of failing to provide language-concordant services to patients and their caregivers. It also helps to identify one of the sources of pediatric asthma health disparities in Latino populations. The emphasis on the role of the caregiver in action plan utilization is a unique aspect of this study, and it is one of the first to examine the issue in this way. It highlights the importance of caregivers in health system transitions and illustrates how a language barrier can affect those transitions.
The authors’ explicit use of a power analysis to calculate their sample size is a strength of the study. Furthermore, the authors differentiated their respondents by country of origin, something that rarely occurs in studies of Latinos [7], which allows the reader to examine the findings at a micro level within this population. The presentation of Spanish-language quotes with their translations within the manuscript provides transparency, allowing bilingual readers to verify the accuracy of the authors’ translation.
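For readers less familiar with this step, a minimal sketch of a sample-size calculation for comparing two proportions follows; the assumed rates of action plan use, alpha, and power are hypothetical placeholders, not values reported by the authors.

# Hypothetical sample-size sketch for comparing action plan use between two caregiver groups.
# The assumed proportions (0.65 vs 0.40), alpha, and power are illustrative inputs only.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

effect_size = proportion_effectsize(0.65, 0.40)  # Cohen's h for the two assumed proportions
n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, ratio=1.0, alternative="two-sided"
)
print(round(n_per_group))  # approximate number of caregivers needed per language group

Under these particular assumptions the calculation calls for roughly 30 caregivers per group; the point is the procedure, not the specific numbers.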
There are, however, a number of methodological issues that should be noted. The authors acknowledge that they did not account for asthma severity in the survey or control for it in the analysis, did not assess health literacy, and did not differentiate their results by country of origin. The latter point is important because the immigration experiences and demographic profiles of Latinos differ significantly by country of origin and could factor into action plan use. The description of the survey translation process also did not indicate how it accounted for the well-established linguistic variation that occurs in the Spanish language. Additionally, US census data show that the main countries of origin of Latinos in the service area of the study are Puerto Rico, Ecuador, and Mexico [1]. The survey itself had Ecuador as a write-in and Dominican as a response option, a combination that reflects the Latino demographic composition of the nearest large urban area. Thus, when collecting country of origin data on immigrant patients, country choices should reflect local demographics rather than national trends for maximum precision.
Another concern is that Spanish language literacy was not assessed. Many Latino immigrants may have limited reading ability in Spanish. For Mexican immigrants in particular, Spanish may be a second language after their indigenous language. This is also true for some South American Latino immigrants from the Andean region. Many Latino immigrants come to the United States with less than an 8th grade education and likely come from educational systems of poor quality, which subsequently affects their Spanish language reading and writing skills [8]. Assessing education level based on US equivalents is not an accurate way to gauge literacy. Thus, assessing reading literacy in Spanish before surveying patients would have been a useful step that could have further refined the results. These factors will have implications for action plan utilization and implementation for any chronic disease.
Providers often think that language barriers are an obvious factor in health disparities and service delivery, but few studies have actually captured or quantified the effects of language barriers on health outcomes. Most studies only identify language barriers as an access issue. This study provides a good illustration of the impact of a language barrier on a known and effective intervention for pediatric asthma management. Practitioners can take the consequences illustrated in this study and easily extrapolate the contribution to health disparities on a broader scale.
Applications for Clinical Practice
Practitioners caring for patients in EDs where the patient or caregiver has a language barrier should make every effort to use appropriate interpreter services when patient teaching occurs. Assessing not only health literacy but also reading ability in the LEP patient or caregiver is important, since both will affect the dyad’s ability to implement the self-care measures recommended in patient teaching sessions or to implement an action plan. Asking patients about their country of origin, regardless of their legal status, will help practitioners refine patient teaching and the language they (and the interpreter, when appropriate) use to explain what needs to be done to manage the condition.
—Allison Squires, PhD, RN
1. Ryan C. Language use in the United States: 2011. Migration Policy Institute: Washington, DC; 2013.
2. Diamond LC, Luft HS, Chung S, Jacobs EA. “Does this doctor speak my language?” Improving the characterization of physician non-English language skills. Health Serv Res 2012;47(1 Pt 2):556–69.
3. Jacobs EA. Patient centeredness in medical encounters requiring an interpreter. Am J Med 2000;109:515.
4. Hsieh E. Understanding medical interpreters: reconceptualizing bilingual health communication. Health Commun 2006;20:177–86.
5. Karliner LS, Kim SE, Meltzer DO, Auerbach AD. Influence of language barriers on outcomes of hospital care for general medicine inpatients. J Hosp Med 2010;5:276–82.
6. Lindholm M, Hargraves JL, Ferguson WJ, Reed G. Professional language interpretation and inpatient length of stay and readmission rates. J Gen Intern Med 2012;27:1294–9.
7. Gerchow L, Tagliaferro B, Squires A, et al. Latina food patterns in the United States: a qualitative metasynthesis. Nurs Res 2014;63:182–93.
8. Sudore RL, Landefeld CS, Pérez-Stable EJ, et al. Unraveling the relationship between literacy, language proficiency, and patient-physician communication. Patient Educ Couns 2009;75:398–402.
Does Exercise Help Reduce Cancer-Related Fatigue?
Study Overview
Objective. To systematically review randomized controlled trials (RCTs) examining the effects of exercise interventions on cancer-related fatigue (CRF) in patients during and after treatment to determine differential effects.
Design. Meta-analysis.
Data. 70 RCTs with a combined sample of 4881 oncology patients during active treatment (eg, chemotherapy, radiation therapy, hormone therapy) or after completion of treatment published before August 2011 that analyzed the effect on CRF of an exercise program compared with a non-exercise control. Excluded from analysis were RCTs that compared exercise with other types of interventions (ie, education, pharmacotherapy, different methods of exercise). 43 studies examined exercise during treatment while 27 studied the effects after treatment.
Measurement. Effect size was calculated to determine the magnitude of the effect of exercise on improving CRF.
Main results. The effect size (Δ = 0.34, P < 0.001) for the total sample of 70 RCTs indicated that exercise has a moderate effect on CRF regardless of treatment status. When effect sizes were calculated for the 43 RCTs that examined patients during treatment, exercise was found to significantly decrease CRF (Δ = 0.32, P < 0.001). Based on the calculated effect size for the 27 RCTs that examined exercise after treatment completion, exercise continues to significantly decrease CRF (Δ = 0.38, P < 0.001). The effect of exercise on CRF was consistent both during and after treatment and across cancer diagnosis, patient age, and sex.
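As a point of reference for interpreting the Δ values above, effect sizes of this kind are generally standardized mean differences. A generic form (shown as an illustration, not necessarily the exact estimator the authors used) is

\Delta \approx \frac{\bar{X}_{\text{control}} - \bar{X}_{\text{exercise}}}{s_{\text{pooled}}}, \qquad s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1) s_1^{2} + (n_2 - 1) s_2^{2}}{n_1 + n_2 - 2}},

where the numerator is the difference in mean CRF scores between the control and exercise groups, s_1 and s_2 are the group standard deviations, and n_1 and n_2 are the group sizes; with this orientation, a positive value indicates lower fatigue in the exercise arm. Study-level values are then pooled across trials, typically weighting each by the inverse of its variance.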
Exercise reduces CRF both during and after treatment. During treatment, CRF severity decreases by 4.9% in patients who exercise, compared with a 29.1% increase in patients who do not exercise. After treatment, exercise decreases CRF by 20.5%, compared with a decrease of 1.3% in patients who do not exercise.
Both during and after treatment, patients with higher exercise adherence experienced the most improvement (P < 0.001). Patients in active treatment with less severe baseline CRF demonstrated greater adherence to the exercise program and saw greater improvements in CRF. Patients who were further from active treatment saw greater CRF severity reduction than patients closer to active treatment. After treatment, the longer the exercise program, the more effective it was in decreasing CRF. No specific type of exercise program (eg, home-based, supervised, vigorous, moderate) was shown to be more effective than another.
Conclusion. Exercise decreases CRF in patients during and after treatment. The type of exercise does not change the positive effect of exercise, so it is important to encourage patients to be active.
Commentary
Cancer-related fatigue (CRF) is the most disturbing symptom associated with a cancer diagnosis and its treatment [1]. Defined as a persistent, subjective sense of tiredness that is not proportional to activity and is not relieved by rest, CRF is reported in over 80% of oncology patients during active treatment [1]. This symptom is not limited to the active treatment phase, with over 30% of cancer survivors reporting CRF lasting at least 5 years [2]. CRF is associated with decreased quality of life (QOL), decreased functional status, and decreased participation in social activities [1]. The pathogenesis of CRF is not fully understood [3,4]. Disruptions in biochemical pathways [5], gene expression [6], chemotherapy or radiation treatments [7,8], cancer pathogenesis [4], or a combination of factors [9] are hypothesized to contribute to the development and severity of CRF. The complexity of CRF pathogenesis makes clinical management difficult.
The current meta-analysis suggests that exercise is an effective nonpharmacologic intervention to ameliorate the impact of this devastating symptom and improve patients’ QOL [10–12]. The meta-analysis demonstrated strong rigor, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [13]. Multiple electronic databases were searched, and additional evidence was obtained by reviewing the reference lists of retrieved articles. No language limitations were placed on the search, adding to the potential generalizability of the results. The procedures used to extract data and evaluate the quality of each retrieved article are detailed, providing evidence of the rigor of the authors’ methodology.
The limitations of the meta-analysis relate to the difficulty of extracting data from multiple studies without consistent reporting of exercise mode, duration, or evaluation methods. Inconsistent CRF assessment methods across studies limit the validity of the results quantifying the magnitude of the CRF change identified. Despite these limitations, this is the first known meta-analysis of the effect of exercise on CRF during and after treatment, synthesizing current research to provide clinical recommendations.
As with all exercise prescriptions for any patient, the patient’s level of adherence is a moderating factor for its effectiveness. A recent study describes an interesting exercise intervention that utilizes a resource some cancer patients may already have in their homes. Seven patients with early-stage non-small cell lung cancer performed light-intensity walking and balance exercises in a virtual reality environment with the Nintendo Wii Fit Plus for 6 weeks after thoracotomy [14]. Exercise started the first week after hospitalization and continued for 6 weeks. Outcomes seen included a decrease in CRF severity, a high level of satisfaction, high adherence rate, and an increase in self-efficacy for managing their CRF [14]. While the small sample size and homogeneous cancer diagnosis and stage limit generalizability, the study describes a promising approach to supporting patient adherence to exercise.
Applications for Clinical Practice
The results of this meta-analysis support exercise as an effective intervention to decrease CRF in oncology patients during and after treatment. Based on these results, exercise should be prescribed as a nonpharmacologic intervention to decrease CRF. Patients’ adherence to the exercise intervention is needed for effective CRF reduction. Thus, exercise prescriptions should be tailored to patients’ individual preferences, abilities, and available resources.
—Fay Wright, MSN, APRN, and Allison Squires, PhD, RN
1. Berger AM, Abernethy A, Atkinson A, et al. NCCN guidelines: cancer-related fatigue. Version 1. National Comprehensive Cancer Network; 2013.
2. Cella D, Lai J-S, Chang C-H, et al. Fatigue in cancer patients compared with fatigue in the general United States population. Cancer 2002;94:528–38.
3. Mustian K, Morrow G, Carroll J, et al. Integrative nonpharmacologic behavioral interventions for the management of cancer-related fatigue. Oncologist 2007;12 Suppl 1:52–67.
4. Ryan J, Carroll J, Ryan E, et al. Mechanisms of cancer-related fatigue. Oncologist 2007;12 Suppl 1:22–34.
5. Hoffman AJ, Given B, von Eye A, et al. Relationships among pain, fatigue, insomnia, and gender in persons with lung cancer. Oncol Nurs Forum 2007;34:785–92.
6. Miaskowski C, Dodd MJ, Lee KA, et al. Preliminary evidence of an association between a functional interleukin-6 polymorphism and fatigue and sleep disturbance in oncology patients and their family caregivers. J Pain Symptom Manage 2010;40:531–44.
7. Hwang SY, Chang V, Rue M, Kasimis B. Multidimensional independent predictors of cancer-related fatigue. J Pain Symptom Manage 2003;26:604–14.
8. Cleeland C, Mendoza T, Wang X, et al. Levels of symptom burden during chemotherapy for advanced lung cancer: Differences between public hospitals and a tertiary cancer center. J Clin Oncol 2011;29:2859–65.
9. Cleeland C, Bennett G, Dantzer R, et al. Are the symptoms of cancer and cancer treatment due to a shared biologic mechanism? A cytokine-immunologic model of cancer symptoms. Cancer 2003;97:2919–25.
10. Al Majid S, Gray DP. A biobehavioral model for the study of exercise interventions in cancer-related fatigue. Biol Res Nurs 2009;10:381–91.
11. Cramp F, Byron-Daniel J. Exercise for the management of cancer-related fatigue in adults. Cochrane Database Syst Rev 2012;11:CD006145.
12. Puetz TW, Herring MP. Differential effects of exercise on cancer-related fatigue during and following treatment: a meta-analysis. Am J Prev Med 2012;43:e1–24.
13. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009;6(7):e1000097.
14. Hoffman AJ, Brintnall RA, Brown JK, et al. Too sick not to exercise: Using a 6-week, home-based exercise intervention for cancer-related fatigue self-management for postsurgical non-small cell lung cancer patients. Cancer Nurs 2013;36:175–88.
What Is the Global Burden of Unsafe Medical Care?
Study Overview
Objective. To examine the global burden of unsafe medical care and its comparative frequency in low/middle-income vs. high-income countries.
Design. Analytical modeling of aggregated data from observational studies.
Data. Two primary sources of data were used. First, the team conducted a search of over 16,000 articles written in English after 1976 that aimed for a comprehensive examination of both peer-reviewed and non–peer-reviewed studies focused on 7 inpatient adverse events (see below) and the clinical features of the patients injured by them. Two separate literature reviews were conducted in 2007 through early 2008 and then repeated in 2011. Discussions with international experts in each topic area informed the selection process. The second source of data was epidemiological studies commissioned by the World Health Organization (WHO). These aimed to identify inpatient adverse events using a 2-stage medical record review in 26 hospitals across 8 low- and middle-income countries (LMICs) in the Eastern Mediterranean and North African regions, and 35 hospitals across 5 countries in Latin America.
Main outcome measures. 7 types of adverse events were evaluated in the analysis: (1) adverse drug events, (2) catheter-related urinary tract infection, (3) catheter-related blood stream infections, (4) nosocomial pneumonia, (5) venous thromboembolism, (6) falls, and (7) pressure ulcers (decubiti). The global burden of disease (GBD) is a standard metric that uses disability-adjusted life years (DALYs) as a proxy measure of morbidity and mortality related to a specific condition. The GBD DALYs model requires several key inputs: the number of people affected, the age at which they are affected, and the clinical consequence of the adverse events. In this study, a single average age per event was used instead of the standard GBD calculations by age and sex. Each input of GBD and DALYs was calculated separately for high-income countries (HICs) versus LMICs. The World Bank sets the income categorization for countries and adjusts the information on an annual basis. Countries in each category share common characteristics of socioeconomic development and epidemiological profiles.
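For readers unfamiliar with the metric, a DALY estimate combines years of life lost to premature death (YLL) with years lived with disability (YLD). The sketch below works through that arithmetic for a single hypothetical adverse event type; every input value is invented for illustration, and the authors' actual model may differ from this simplified form.

```python
# Illustrative DALY arithmetic for one hypothetical adverse event type.
# All numbers below are invented for illustration; they are not the
# study's inputs or results.

def dalys(deaths, life_expectancy_remaining,
          nonfatal_cases, disability_weight, avg_duration_years):
    """DALYs = YLL + YLD (simplified; no age weighting or discounting)."""
    yll = deaths * life_expectancy_remaining                        # years of life lost
    yld = nonfatal_cases * disability_weight * avg_duration_years   # years lived with disability
    return yll + yld

# Hypothetical inputs for, say, catheter-related blood stream infections
# in one country-income group:
example = dalys(
    deaths=2_000,                    # fatal events
    life_expectancy_remaining=30.0,  # standard life expectancy at the average age of death
    nonfatal_cases=50_000,           # nonfatal events
    disability_weight=0.2,           # 0 = perfect health, 1 = equivalent to death
    avg_duration_years=0.1,          # roughly 5 weeks of disability per nonfatal case
)
print(f"Illustrative DALYs lost: {example:,.0f}")  # 61,000
```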
Main results. The rate of hospitalization in HICs was higher than in LMICs: 10.8 vs. 3.7 per 100 citizens per year. There were large variations in the reported incidence of adverse events in both HICs and LMICs. Of the 7 adverse events assessed, adverse drug events were the most common type in HICs, with an incidence rate of 5.0%. In LMICs, venous thromboembolism was most common, with an incidence rate of 3.0%. Catheter-related blood stream infection, venous thromboembolism, and pressure ulcers had comparable rates between HICs and LMICs. The authors estimated that for every 100 hospitalizations, approximately 14.2 adverse events occur in HICs and 12.7 in LMICs, which translates to roughly 16.8 million injuries annually among hospitalized patients in HICs; LMICs experienced approximately 50% more adverse events overall than HICs. Of note, LMICs had 5 times the population of HICs, but the authors did not calculate proportional incidence rates.
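As a rough plausibility check on how a per-hospitalization rate becomes an aggregate injury count, the sketch below scales the reported rates by an assumed high-income-country population of about 1.1 billion (approximately the World Bank high-income total for the period). That population figure is our assumption, not a number reported in the study.

```python
# Back-of-the-envelope scaling of per-hospitalization adverse event rates
# to an annual injury count. The HIC population is an assumption
# (~1.1 billion, roughly the World Bank high-income total for the period),
# not a value reported in the study.

hic_population = 1.1e9               # assumed
hospitalizations_per_100 = 10.8      # per 100 citizens per year (reported)
adverse_events_per_100_hosp = 14.2   # per 100 hospitalizations (reported)

hospitalizations = hic_population * hospitalizations_per_100 / 100
injuries = hospitalizations * adverse_events_per_100_hosp / 100
print(f"Estimated annual injuries in HICs: {injuries / 1e6:.1f} million")
# ~16.9 million, consistent with the roughly 16.8 million reported
```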
The authors estimated that 22.6 million DALYs were lost globally due to these adverse events in 2009. Unsurprisingly, the number of DALYs lost was more than twice as high in LMICs as in HICs, likely due to the combination of weaker health systems and shortages of human resources for health in those countries. In LMICs, venous thromboembolism was the main source of lost DALYs. Although the incidences of hospital-acquired infections (such as nosocomial pneumonia and catheter-related blood stream and urinary tract infections) were smaller, they caused a comparable number of DALYs lost. Premature death from adverse events was the primary source of DALYs lost in all countries.
Conclusion. Adverse events from unsafe care are a significant problem across all countries.
Commentary
Globally, efforts to improve health care delivery for diseases that cause substantial morbidity and mortality have been largely successful. For example, antimalarial drugs and antiretroviral therapies have become more accessible to patients in need [1,2]. However, to create a more sustainable model, the health care systems of developing countries need sustained investment to care for their growing populations and increasing medical needs [3,4]. Allegranzi et al [5] concluded from a systematic review that health care–associated infections are ubiquitous and occur at much higher rates in LMICs than in HICs. Findings from this study support those from Allegranzi’s review.
This study furthers our understanding of the impact of unsafe medical care on the GBD and DALYs lost. Several other adverse events related to unsafe care, such as unsafe surgery, harms due to counterfeit drugs, unsafe childbirth, and unsafe blood use, were not included in this study due to data limitations. The estimated DALYs lost would be much higher if these events were counted.
This study has several strengths. First, the authors sought out the best available data from a large number of sources. Evidence selected for the analysis came from studies with good quality ratings. The 7 outcome measures used in this study are now standard minimum reporting data internationally. Nonetheless, several limitations are present. As the authors noted, the limited availability of high-quality data is common in international analyses. There can be reporting delays, data collection errors due to a lack of technical capacity, and corruption problems that may influence data quality. Poor reporting practices may exclude or underreport adverse events, and the paucity of data for some variables limited the calculation of estimates. Second, few studies used standardized approaches in their data collection and analysis, contributing to data inconsistencies that may affect the reliability of the results. Third, the same life expectancy value (the WHO standard) was used for all individuals regardless of their countries’ life expectancy. The authors acknowledged that this approach was controversial and may have resulted in a different number of DALYs lost. Finally, only English-language publications were used, which may have influenced the findings. Latin America, the former Soviet Union states, and many Asian countries have growing bodies of research published in their native languages.
Despite the limitations, the study is one of the first systematic analyses of GBD, the outcomes of unsafe medical care, and associated lost DALYs. The analysis identified that a majority of the harms from adverse events occur in LMICs. Policies addressing, supporting, and enforcing patient safety measures during the health care experience will help ensure reductions in mortality and morbidity in LMICs. Improving the safety of the healthcare system should be a major policy and research emphasis across the globe.
Applications for Clinical Practice
Even though patient safety initiatives have been at the forefront of many organizational policies and health care provider education since the Institute of Medicine’s 1999 report “To Err Is Human,” this study reminds practitioners that safe clinical practice is essential for reducing domestic disease burden. The cost of adverse events from unsafe practice in the United States was estimated to be around $16.6 billion in 2004 alone [6]. With the World Health Organization calling for strengthened research infrastructure across the globe and LMICs now seeing the value of data for health systems policymaking and management, future research will help to further refine the methods developed in this study.
—Jin Jun, MSN, APRN-BC, CCRN, and Allison Squires, PhD, RN
1. Kaplan J, Hanson D, Dworkin M, et al. Epidemiology of human immunodeficiency virus-associated opportunistic infections in the United States in the era of highly active antiretroviral therapy. Clin Infect Dis 2000;30 Suppl 1:S5–14.
2. Eaton J, et al. Health benefits, costs, and cost-effectiveness of earlier eligibility for adult antiretroviral therapy and expanded treatment coverage: a combined analysis of 12 mathematical models. Lancet Global Health 2014;2:e23–34.
3. Mills A, Brugha R, Hanson K, et al. What can be done about the private health sector in low-income countries? Bull World Health Org 2002;80:325–30.
4. Schlein K, De La Cruz A, Gopalakrishnan T, Montagu D. Private sector delivery of health services in developing countries: a mixed-methods study on quality assurance in social franchises. BMC Health Serv Res 2013;13:4.
5. Allegranzi B, Bagheri N, Combescure C, et al. Burden of endemic health-care-associated infection in developing countries: systematic review and meta-analysis. Lancet 2011;377:228–41.
6. Jha A, Chan D, Ridgway A, et al. Improving safety and eliminating redundant tests: cutting costs in US hospitals. Health Affairs 2009;28:1475–84.
Does Bioelectrical Impedance Analysis Provide a Reliable Diagnosis of Secondary Lymphedema in Breast Cancer Patients?
Study Overview
Objective. To evaluate the reliability, sensitivity, and specificity of bioelectrical impedance analysis (BIA) in the diagnosis of secondary lymphedema.
Design. Cross-sectional study utilizing a test-retest method.
Setting and participants. The researchers used a purposeful sampling technique to recruit women between 2010 and 2011 from a metropolitan cancer center and communities in the New York City metropolitan area. Participants included women who were 18 years of age or older and able to read and write in English. Exclusion criteria included patients with bilateral breast disease, recurrent cancer, artificial limb, knee, or hip, and kidney or heart failure. Study participants were divided into 3 groups: breast cancer survivors with lymphedema, those at risk for lymphedema, and healthy adult women (no history of breast cancer or lymphedema). Women in the at risk category had to have completed surgical treatment, chemotherapy and/or radiation within the 5 years prior to the study enrollment.
Measurements. Patients’ arms were measured by the same 2 researchers using sequential circumferential measurements. BIA was measured in all patients with the ImpXCA (Impedimed Inc, Pittsford, NY), an FDA-approved device that measures impedance and resistance of the extracellular fluid. The ImpXCA utilizes a scale to correlate BIA to an L-Dex (lymphedema index) ratio; –10 to +10 defines the normal range of L-Dex values for a patient without lymphedema. Measurements were taken 3 times at 5-minute intervals during the same visit to test the stability of BIA.
Main results. 250 patients were in the sample: 42 with known lymphedema, 148 at risk for lymphedema, and 60 healthy female adults. L-Dex ratios ranged from –9.7 to 7.7 in the healthy population, –9.6 to 36.9 in the at risk group, and 0.9 to 115 in the group with lymphedema. Mean L-Dex ratios were significantly different between the healthy and lymphedema groups (P < 0.001) and the at risk and lymphedema groups (P < 0.001). There was no difference between the at risk and healthy groups (P = 0.85). Utilizing an L-Dex ratio cutoff of 7.1 provided 80% sensitivity and 90% specificity in the diagnosis of secondary lymphedema.
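For readers less familiar with diagnostic test metrics, the sketch below shows how a single L-Dex cutoff of 7.1 is turned into sensitivity and specificity figures by tabulating true and false positives against a reference diagnosis. The L-Dex values used are invented for illustration and are not the study data.

```python
# How an L-Dex cutoff yields sensitivity and specificity.
# The L-Dex values below are invented for illustration, not study data.

CUTOFF = 7.1

lymphedema    = [12.4, 35.0, 5.9, 22.1, 8.8, 15.3, 3.1, 40.7, 9.9, 18.2]   # hypothetical, reference-positive
no_lymphedema = [-3.2, 1.5, 9.0, -7.7, 4.4, 0.2, -5.1, 6.8, 2.9, -1.0]     # hypothetical, reference-negative

tp = sum(x >= CUTOFF for x in lymphedema)      # flagged and truly affected
fn = sum(x <  CUTOFF for x in lymphedema)      # missed cases
fp = sum(x >= CUTOFF for x in no_lymphedema)   # false alarms
tn = sum(x <  CUTOFF for x in no_lymphedema)   # correctly cleared

sensitivity = tp / (tp + fn)   # proportion of true lymphedema flagged
specificity = tn / (tn + fp)   # proportion of unaffected correctly cleared
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
# With these invented values: sensitivity = 80%, specificity = 90%
```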
Reliability and reproducibility of BIA by ImpXCA using the L-Dex ratio were assessed using a test-retest method. Intra-class correlation coefficients (ICC) demonstrated strong stability of the repeated measurements in the healthy group, with ICC = 0.99 (95% CI, 0.99–0.99), and in the at risk group, with ICC = 0.99 (95% CI, 0.99–0.99). There was also fair agreement among the repeated measurements in the lymphedema group, with ICC = 0.69 (95% CI, 0.58–0.82). All of these findings were statistically significant (P < 0.001).
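The stability claim rests on the intraclass correlation coefficient. As an illustration of what a test-retest ICC captures, the sketch below computes a one-way random-effects ICC(1,1) from invented repeated L-Dex readings; the authors' exact ICC model may differ from this simplified form.

```python
import numpy as np

def icc_1_1(data):
    """One-way random-effects ICC(1,1).
    data: 2-D array, rows = subjects, columns = repeated measurements."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand_mean = data.mean()
    subj_means = data.mean(axis=1)
    msb = k * np.sum((subj_means - grand_mean) ** 2) / (n - 1)       # between-subjects mean square
    msw = np.sum((data - subj_means[:, None]) ** 2) / (n * (k - 1))  # within-subjects mean square
    return (msb - msw) / (msb + (k - 1) * msw)

# Three hypothetical L-Dex readings taken 5 minutes apart for 4 subjects
# (invented values, not study data):
readings = [
    [ 2.1,  2.3,  2.0],
    [-4.0, -3.8, -4.1],
    [15.2, 15.6, 15.1],
    [ 7.4,  7.2,  7.5],
]
print(f"ICC(1,1) = {icc_1_1(readings):.2f}")  # close to 1: highly stable readings
```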
Conclusion. The L-Dex ratio is reliable and reproducible and may be helpful in distinguishing women with lymphedema from those without lymphedema. BIA in conjunction with other tools, such as self-report of symptoms, circumferential measurements, and clinical observation, may have a role in diagnosing lymphedema.
Commentary
Providers tend to diagnose the initial onset of lymphedema within the first year and a half following surgical treatment for breast cancer [1]. Many women, however, go undiagnosed until the illness has progressed. Earlier treatment has the potential to improve patient outcomes [2]. Although awareness of secondary lymphedema among breast cancer survivors has increased over the past 10 years, the diagnosis remains difficult and the development of effective diagnostic tools continues to challenge health care providers.
The current gold standard for diagnosis is the water displacement method, in which the affected and unaffected extremities are each placed into a tank of water and the displaced water is measured [3]. A discrepancy of more than 200 mL between arms is used to make the diagnosis of lymphedema. While useful, this measurement is messy and difficult to set up, and thus underutilized. Many providers have turned to circumferential measurements as their primary method to diagnose and monitor lymphedema [4]. However, this method may miss patients in the earlier stages of lymphedema, since it measures the size of limbs rather than changes in the tissue. Without a definitive test to diagnose lymphedema, researchers and health care providers continue to search for the most accurate, reliable, and feasible means to assist in the diagnosis.
This cross-sectional study suggests that L-Dex can be helpful in detecting lymphedema. A weakness of the study is that the investigators did not compare the results of BIA to the current gold standard of water displacement, but rather to circumferential measurement. In addition, while all results were reproducible, the difference between groups was notable in terms of age and body mass index, making it difficult to generalize to all patients at risk for lymphedema or differentiate results by those same variables. Although having the same 2 investigators obtain circumferential tape measurements is preferable to having multiple investigators do so, such measurements are still at risk for human error.
BIA shows promise as a diagnostic tool. Future studies should include healthy patients with characteristics similar to those of at risk patients and lymphedema patients. Efforts also could be directed towards determining whether combining BIA with other methods, such as self-report, circumferential measurements, and close observation, may offer greater sensitivity and specificity than one method alone.
Applications for Clinical Practice
Secondary lymphedema is a common complication caused by surgical treatment of breast cancer. Early treatment is linked to a decrease in debilitating factors such as immobility of affected joints, skin changes, and risk for infection. Measurement of extracellular fluid utilizing L-Dex ratios produces reliable and repeatable results in the assessment for lymphedema. Paired with additional tools and resources, it may be helpful in making a diagnosis that is normally difficult in the earliest stages of the condition. The early diagnosis of secondary lymphedema may allow for improved quality of life for survivors of breast cancer.
—Jennifer L. Nahum, MSN, CPNP-AC, PPCNP-BC, and Allison Squires, PhD, RN
1. Czerniec SA, Ward LC, Lee MJ, et al. Segmental measurement of breast cancer-related arm lymphoedema using perometry and bioimpedance spectroscopy. Support Care Cancer 2011;19:703–10.
2. Damstra RJ, Voesten HG, van Schelven WD, van der Lei B. Lymphatic venous anastomosis (LVA) for treatment of secondary arm lymphedema. A prospective study of 11 LVA procedures in 10 patients with breast cancer related lymphedema and a critical review of the literature. Breast Cancer Res Treat 2009;113:199–206.
3. Brorson H, Höijer P. Standardised measurements used to order compression garments can be used to calculate arm volumes to evaluate lymphoedema treatment. J Plast Surg Hand Surg 2012;46:410–5.
4. Langbecker D, Hayes SC, Newman B, Janda M. Treatment for upper-limb and lower-limb lymphedema by professionals specializing in lymphedema care. Eur J Cancer Care (Engl) 2008;17:557–64.
Study Overview
Objective. To evaluate the reliability, sensitivity, and specificity of bioelectrical impedance analysis (BIA) in the diagnosis of secondary lymphedema.
Design. Cross-sectional study utilizing test-retest method
Setting and participants. The researchers used a purposeful sampling technique to recruit women between 2010 and 2011 from a metropolitan cancer center and communities in the New York City metropolitan area. Participants included women who were 18 years of age or older and able to read and write in English. Exclusion criteria included patients with bilateral breast disease, recurrent cancer, artificial limb, knee, or hip, and kidney or heart failure. Study participants were divided into 3 groups: breast cancer survivors with lymphedema, those at risk for lymphedema, and healthy adult women (no history of breast cancer or lymphedema). Women in the at risk category had to have completed surgical treatment, chemotherapy and/or radiation within the 5 years prior to the study enrollment.
Measurements. Patient’s arms were measured by the same 2 researchers using sequential circumferential measurements. BIA was measured in all patients with the ImpXCA (Impedimed Inc, Pittsford, NY), an FDA-approved device that measures impedance and resistance of the extracellular fluid. The ImpXCA utilizes a scale to correlate BIA to an L-Dex (lymphedema index) ratio; –10 to +10 defines the normal range of L-Dex values for a patient without lymphedema. Measurements were taken at 5-minute increments for a total of 3 times at the same visit to test for stability of BIA.
Main results. 250 patients were in the sample: 42 with known lymphedema, 148 at risk for lymphedema, and 60 healthy female adults. L-Dex ratios ranged from –9.7 to 7.7 in the healthy population, –9.6 to 36.9 in the at risk group, and 0.9 to 115 in the group with lymphedema. Mean L-Dex ratios were significantly different between the healthy and lymphedema groups (P < 0.001) and the at risk and lymphedema groups (P < 0.001). There was no difference between the at risk and healthy groups (P = 0.85). Utilizing an L-Dex ratio cutoff of 7.1 provided 80% sensitivity and 90% specificity in the diagnosis of secondary lymphedema.
Study Overview
Objective. To evaluate the reliability, sensitivity, and specificity of bioelectrical impedance analysis (BIA) in the diagnosis of secondary lymphedema.
Design. Cross-sectional study utilizing a test-retest method.
Setting and participants. The researchers used a purposeful sampling technique to recruit women between 2010 and 2011 from a metropolitan cancer center and communities in the New York City metropolitan area. Participants were women 18 years of age or older who were able to read and write in English. Exclusion criteria included bilateral breast disease, recurrent cancer, an artificial limb, knee, or hip, and kidney or heart failure. Study participants were divided into 3 groups: breast cancer survivors with lymphedema, those at risk for lymphedema, and healthy adult women (no history of breast cancer or lymphedema). Women in the at-risk group had to have completed surgical treatment, chemotherapy, and/or radiation within the 5 years prior to study enrollment.
Measurements. Patients’ arms were measured by the same 2 researchers using sequential circumferential measurements. BIA was measured in all patients with the ImpXCA (Impedimed Inc, Pittsford, NY), an FDA-approved device that measures impedance and resistance of extracellular fluid. The ImpXCA uses a scale to convert the BIA measurement to an L-Dex (lymphedema index) ratio; –10 to +10 defines the normal range of L-Dex values for a patient without lymphedema. To test the stability of BIA, measurements were taken 3 times at 5-minute intervals during the same visit.
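For readers who want to see these numbers in action, the following is a minimal Python sketch, not study code; the readings and function names are hypothetical. It flags an L-Dex value outside the reported –10 to +10 normal range and checks how tightly three same-visit readings cluster.

```python
# Minimal sketch (hypothetical values, not study code): flag an L-Dex reading
# outside the reported normal range of -10 to +10 and check the spread of the
# three repeated readings taken at the same visit.

NORMAL_RANGE = (-10.0, 10.0)  # normal L-Dex range for a patient without lymphedema

def outside_normal_range(l_dex: float) -> bool:
    """Return True if a single L-Dex reading falls outside -10 to +10."""
    low, high = NORMAL_RANGE
    return l_dex < low or l_dex > high

def repeat_spread(readings: list[float]) -> float:
    """Spread (max - min) across repeated readings from one visit."""
    return max(readings) - min(readings)

# Example: three readings taken 5 minutes apart at one visit (hypothetical)
visit_readings = [11.2, 11.5, 11.0]
print(any(outside_normal_range(r) for r in visit_readings))  # True
print(round(repeat_spread(visit_readings), 2))               # 0.5
```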
Main results. 250 patients were in the sample: 42 with known lymphedema, 148 at risk for lymphedema, and 60 healthy adult women. L-Dex ratios ranged from –9.7 to 7.7 in the healthy group, –9.6 to 36.9 in the at-risk group, and 0.9 to 115 in the lymphedema group. Mean L-Dex ratios differed significantly between the healthy and lymphedema groups (P < 0.001) and between the at-risk and lymphedema groups (P < 0.001); there was no difference between the at-risk and healthy groups (P = 0.85). An L-Dex ratio cutoff of 7.1 provided 80% sensitivity and 90% specificity for the diagnosis of secondary lymphedema.
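To make the reported operating characteristics concrete, the sketch below applies the 7.1 L-Dex cutoff and computes sensitivity and specificity against a reference diagnosis. The data and function name are hypothetical, not the study dataset.

```python
# Minimal sketch (hypothetical data): apply the reported L-Dex cutoff of 7.1
# and compute sensitivity and specificity against a reference diagnosis.

def sensitivity_specificity(l_dex_values, has_lymphedema, cutoff=7.1):
    """Classify each L-Dex value as positive if it meets or exceeds the cutoff
    and compare with the reference diagnosis (True = lymphedema)."""
    tp = fp = tn = fn = 0
    for value, truth in zip(l_dex_values, has_lymphedema):
        positive = value >= cutoff
        if positive and truth:
            tp += 1
        elif positive and not truth:
            fp += 1
        elif not positive and truth:
            fn += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Hypothetical example
values = [2.3, 8.0, -4.1, 15.2, 6.9, 30.0]
truth  = [False, True, False, True, True, True]
print(sensitivity_specificity(values, truth))  # (0.75, 1.0)
```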
Reliability and reproducibility of BIA by the ImpXCA using the L-Dex ratio were assessed with a test-retest method. Intraclass correlation coefficients (ICCs) indicated strong stability of the repeated measurements in the healthy group (ICC = 0.99; 95% CI, 0.99–0.99) and in the at-risk group (ICC = 0.99; 95% CI, 0.99–0.99). There was fair agreement among the repeated measurements in the lymphedema group (ICC = 0.69; 95% CI, 0.58–0.82). All of these findings were statistically significant (P < 0.001).
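The study text does not state which ICC model was used; the sketch below, on hypothetical readings, shows one common choice for test-retest data, ICC(2,1) (two-way random effects, absolute agreement, single measurement), computed from a subjects-by-repeats matrix.

```python
# Minimal sketch (hypothetical data): ICC(2,1) for test-retest agreement
# across three repeated L-Dex measurements per participant.
import numpy as np

def icc_2_1(data: np.ndarray) -> float:
    """data: n_subjects x k_repeats matrix of L-Dex readings."""
    n, k = data.shape
    grand_mean = data.mean()
    row_means = data.mean(axis=1)
    col_means = data.mean(axis=0)

    ss_total = ((data - grand_mean) ** 2).sum()
    ss_rows = k * ((row_means - grand_mean) ** 2).sum()   # between subjects
    ss_cols = n * ((col_means - grand_mean) ** 2).sum()   # between repeats
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    # Shrout-Fleiss ICC(2,1)
    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Hypothetical readings: 5 participants, 3 repeats each
readings = np.array([
    [ 1.2,  1.4,  1.1],
    [-3.0, -2.8, -3.1],
    [ 8.5,  8.9,  8.6],
    [ 0.3,  0.1,  0.4],
    [12.0, 12.3, 11.8],
])
print(round(icc_2_1(readings), 3))  # close to 1 when repeats are highly consistent
```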
Conclusion. The L-Dex ratio is reliable and reproducible and may be helpful in distinguishing women with lymphedema from those without lymphedema. BIA in conjunction with other tools, such as self-report of symptoms, circumferential measurements, and clinical observation, may have a role in diagnosing lymphedema.
Commentary
Providers tend to diagnose the initial onset of lymphedema within the first year and a half following surgical treatment for breast cancer [1]. Many women, however, go undiagnosed until the condition has progressed, even though earlier treatment has the potential to improve patient outcomes [2]. Although awareness of secondary lymphedema among breast cancer survivors has increased over the past 10 years, the diagnosis remains difficult, and the development of effective diagnostic tools continues to challenge health care providers.
The current gold standard for diagnosis is the water displacement method, in which the affected and unaffected extremities are each placed into a tank of water and the displaced water is measured [3]. A discrepancy of more than 200 mL between arms is used to make the diagnosis of lymphedema. While useful, this method is messy and difficult to set up, and it is therefore underutilized. Many providers have turned to circumferential measurements as their primary method of diagnosing and monitoring lymphedema [4]. However, this approach may miss patients in the earlier stages of lymphedema, since it measures the size of the limb rather than changes in the tissue. Without a definitive test for lymphedema, researchers and health care providers continue to search for the most accurate, reliable, and feasible means of assisting in the diagnosis.
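Circumferential measurements are often converted to limb volumes so a volume criterion can still be applied. The sketch below uses a common truncated-cone (frustum) approximation with hypothetical measurements; it is not necessarily the method used in this study.

```python
# Minimal sketch (hypothetical measurements, not the study's protocol):
# estimate limb volume from circumferential tape measurements using the
# truncated-cone (frustum) approximation, then compare the two arms
# against a 200 mL volume-difference criterion.
import math

def limb_volume_ml(circumferences_cm: list[float], segment_length_cm: float = 4.0) -> float:
    """Sum frustum volumes between successive circumference measurements.
    Each segment: V = h * (C1^2 + C1*C2 + C2^2) / (12 * pi), in cm^3 (~mL)."""
    volume = 0.0
    for c1, c2 in zip(circumferences_cm, circumferences_cm[1:]):
        volume += segment_length_cm * (c1**2 + c1 * c2 + c2**2) / (12 * math.pi)
    return volume

# Hypothetical circumferences (cm) measured every 4 cm along each arm
affected   = [16.0, 17.5, 19.0, 21.0, 23.5, 26.0, 28.0]
unaffected = [15.5, 16.8, 18.0, 20.0, 22.0, 24.0, 26.0]

difference = limb_volume_ml(affected) - limb_volume_ml(unaffected)
print(round(difference, 1), "mL")
print("exceeds 200 mL criterion:", difference > 200)
```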
This cross-sectional study suggests that the L-Dex ratio can be helpful in detecting lymphedema. A weakness of the study is that the investigators compared BIA not with the current gold standard of water displacement but with circumferential measurement. In addition, while all results were reproducible, the groups differed notably in age and body mass index, making it difficult to generalize the findings to all patients at risk for lymphedema or to differentiate results by those variables. And although having the same 2 investigators obtain the circumferential tape measurements is preferable to using multiple investigators, such measurements remain at risk for human error.
BIA shows promise as a diagnostic tool. Future studies should include healthy participants with characteristics similar to those of the at-risk and lymphedema patients. Efforts could also be directed toward determining whether combining BIA with other methods, such as self-report, circumferential measurements, and close observation, offers greater sensitivity and specificity than any one method alone.
Applications for Clinical Practice
Secondary lymphedema is a common complication of surgical treatment for breast cancer. Early treatment is linked to a decrease in debilitating sequelae such as immobility of affected joints, skin changes, and risk for infection. Measurement of extracellular fluid using the L-Dex ratio produces reliable and repeatable results in the assessment for lymphedema. Paired with additional tools and resources, it may be helpful in making a diagnosis that is normally difficult in the earliest stages. Early diagnosis of secondary lymphedema may allow for improved quality of life for breast cancer survivors.
—Jennifer L. Nahum, MSN, CPNP-AC, PPCNP-BC, and Allison Squires, PhD, RN
1. Czerniec SA, Ward LC, Lee MJ, et al. Segmental measurement of breast cancer-related arm lymphoedema using perometry and bioimpedance spectroscopy. Support Care Cancer 2011;19:703–10.
2. Damstra RJ, Voesten HG, van Schelven WD, van der Lei B. Lymphatic venous anastomosis (LVA) for treatment of secondary arm lymphedema. A prospective study of 11 LVA procedures in 10 patients with breast cancer related lymphedema and a critical review of the literature. Breast Cancer Res Treat 2009;113:199–206.
3. Brorson H, Höijer P. Standardised measurements used to order compression garments can be used to calculate arm volumes to evaluate lymphoedema treatment. J Plast Surg Hand Surg 2012;46:410–5.
4. Langbecker D, Hayes SC, Newman B, Janda M. Treatment for upper-limb and lower-limb lymphedema by professionals specializing in lymphedema care. Eur J Cancer Care (Engl) 2008;17:557–64.