Decreased hospital LOS not associated with increase in 30-day readmission rates
Clinical question
Does decreased length of stay result in increased risk of 30-day readmission for hospitalized patients with acute medical illness?
Bottom line
Reduction in length of stay (LOS) is not associated with an increased risk of 30-day readmission for patients with acute medical illness. Although this may suggest that decreased LOS does not affect quality of care, the finding may also reflect improved efficiency in a previously inefficient Veterans Affairs (VA) system, leading to earlier discharges, along with increased efforts to improve transitions of care. LOE = 2b
Reference
Study design
Cohort (retrospective)
Funding source
Government
Allocation
Uncertain
Setting
Inpatient (any location)
Synopsis
To determine whether reductions in LOS adversely affect 30-day readmission rates, these investigators used a national VA administrative database to identify all acute medical admissions to VA hospitals from 1997 to 2010. Patients who died, were transferred to another acute care facility, or whose LOS was longer than 30 days were excluded. Readmissions were defined as those linked to the index admission and occurring within 30 days of discharge. The cohort consisted of more than 4 million admissions and was further subdivided into 5 high-volume diagnoses: heart failure, chronic obstructive pulmonary disease (COPD), acute myocardial infarction (AMI), community-acquired pneumonia, and gastrointestinal bleed. After adjustment for hospital and patient characteristics, LOS decreased during the 14-year period from 5.44 days to 3.98 days, and 30-day readmission rates decreased from 16.5% to 13.8%. Among the 5 high-volume conditions, LOS decreased the most for AMI (by almost 3 days), while readmission rates decreased the most for COPD (3.3%). Further analysis of all medical conditions showed that each additional day of stay was associated with a 3% higher rate of readmission, likely reflecting unmeasured severity of illness that affected both LOS and readmission. Of note, however, hospitals with a mean LOS below the average across all hospitals had higher readmission rates (a 6% increase for each day below the average). Despite this, the overall readmission rate decreased over time as LOS decreased. All-cause mortality at 30 days and 90 days also improved over time.
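The per-day figures above compound multiplicatively: a 3% higher readmission rate per additional day implies roughly a 9% higher rate for a stay 3 days longer. A minimal sketch of that arithmetic (illustrative only, not the study's actual adjusted model):

```python
# Illustrative compounding of the reported per-day associations
# (hypothetical worked example, not the study's regression output).

def relative_readmission_rate(extra_days: float, per_day_increase: float = 0.03) -> float:
    """Relative readmission rate for a stay `extra_days` longer,
    assuming the reported per-day increase compounds multiplicatively."""
    return (1 + per_day_increase) ** extra_days

# A stay 3 days longer than the comparator, at 3% per day:
print(round(relative_readmission_rate(3), 3))        # ~1.093, i.e. ~9% higher

# A hospital averaging 2 days below the overall mean LOS,
# using the reported 6% increase per day below average:
print(round(relative_readmission_rate(2, 0.06), 3))  # ~1.124, i.e. ~12% higher
```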
Femoral lines not associated with increased risk of bloodstream infections
Clinical question
Do central venous catheters in the femoral vein increase the risk of catheter-related bloodstream infections as compared with those placed in the subclavian or internal jugular veins?
Bottom line
The risk of catheter-related bloodstream infections (CRBIs) from nontunneled central venous catheters has decreased in the last decade. This review suggests that there is no difference in risk of CRBIs when comparing catheters placed in femoral sites with those placed in subclavian or internal jugular (IJ) sites, especially when looking at data from more recent studies. LOE = 1a
Reference
Study design
Meta-analysis (other)
Funding source
Unknown/not stated
Allocation
Uncertain
Setting
Inpatient (any location)
Synopsis
Current guidelines from the Centers for Disease Control and Prevention recommend avoiding the femoral vein for central venous access in adult patients because of a potentially higher risk of CRBI. Two independent investigators searched MEDLINE, EMBASE, the Cochrane Database of Systematic Reviews, and bibliographies of relevant articles, and performed an Internet search, to find randomized controlled trials (RCTs) and cohort studies examining the risk of CRBIs from nontunneled central venous catheters placed at the femoral site as compared with the subclavian or IJ sites. Two RCTs, 8 cohort studies, and data from a Welsh infection control surveillance Web site were selected. Two authors independently extracted data from the selected studies; no formal quality assessment was performed. Data from the RCTs alone showed no difference in CRBIs between femoral sites and subclavian or IJ sites. Data from all studies comparing femoral with subclavian sites likewise showed no significant difference in the risk of CRBIs. For comparisons of femoral and IJ sites, the overall data favored the IJ site (relative risk of infection with femoral placement = 1.90; 95% CI, 1.21-2.97; P = .005). However, 2 of the 9 studies included in this analysis were "statistical outliers," possibly because of unique circumstances in the hospitals in which they were performed, limiting their generalizability. When these 2 studies were removed from the analysis, there was no significant difference between femoral and IJ sites. For both comparisons (femoral vs subclavian and femoral vs IJ), there was an interaction between risk of infection and year of study publication, with earlier studies noting a greater risk of infection at femoral sites. Overall, these data confirm a decrease of more than 50% in the incidence of CRBIs over the last 10 years.
Additionally, the meta-analysis found no difference in the risk of deep venous thrombosis between femoral and subclavian or IJ sites.
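Relative-risk statistics of the kind quoted above (RR = 1.90; 95% CI, 1.21-2.97) are conventionally computed on the log scale from 2x2 event counts. A minimal sketch using hypothetical counts (the pooled event counts from the review are not reproduced here):

```python
import math

def relative_risk_ci(events_a: int, n_a: int, events_b: int, n_b: int,
                     z: float = 1.96) -> tuple:
    """Relative risk of group A vs group B with a Wald-type 95% CI
    computed on the log scale (the standard Katz method)."""
    rr = (events_a / n_a) / (events_b / n_b)
    se_log = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Hypothetical counts: 30 infections in 1000 femoral catheters
# vs 16 infections in 1000 internal jugular catheters.
rr, lo, hi = relative_risk_ci(30, 1000, 16, 1000)
print(f"RR = {rr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")  # RR = 1.88 (95% CI, 1.03-3.42)
```

The CI is asymmetric around the point estimate because it is symmetric on the log scale, which is why published intervals like 1.21-2.97 are not centered on the RR.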
Norovirus now top cause of acute gastroenteritis in young U.S. children
Norovirus is now the leading cause of acute gastroenteritis requiring medical care among U.S. children younger than 5 years of age, according to a report published online March 20 in the New England Journal of Medicine.
Now that rotavirus vaccines have dramatically reduced the number of acute gastroenteritis cases attributable to that organism, norovirus infections have taken over the lead in causing the disorder in the young U.S. pediatric population. Norovirus is responsible for an estimated 1 million health care visits each year for this age group, at an estimated cost approaching $300 million, said Daniel C. Payne, Ph.D., of the National Center for Immunization and Respiratory Diseases, Centers for Disease Control and Prevention, and his associates.
"According to our estimation, by their fifth birthday, 1 in 278 U.S. children are hospitalized for norovirus infection, 1 in 14 are seen in the emergency department, and 1 in 6 are seen by outpatient care providers," the investigators noted.
They studied the epidemiology of the infection because now that candidate norovirus vaccines are in development, "there is a need to directly measure the pediatric health care burden of norovirus-associated gastroenteritis."
Dr. Payne and his colleagues analyzed data from the New Vaccine Surveillance Network, which collects information on the medical care of children residing near Rochester, N.Y.; Nashville, Tenn.; and Cincinnati – a catchment population exceeding 141,000 children under age 5.
The researchers prospectively assessed cases of acute gastroenteritis treated at hospitals, emergency departments, and outpatient clinics during two successive 12-month surveillance periods between October 2008 and September 2010. There were 1,077 cases the first year and 820 the second year; the data from these were compared with data from 806 age-matched children attending well-child visits, who served as a control group.
The disease burden of norovirus infection was "consistently high" during both years, accounting for 20%-22% of cases of acute gastroenteritis. Norovirus was detected in 4% of healthy controls in 2009. The overall rate of medical attention for the infection was highest – 47% – among children aged 6-18 months, Dr. Payne and his associates reported (N. Engl. J. Med. 2013;368:1121-30).
This study was supported by the CDC. Dr. Payne reported that he did not have any conflicts of interest relevant to this study. His coauthors reported ties to GlaxoSmithKline, Merck, and Luminex Molecular Diagnostics.
FROM THE NEW ENGLAND JOURNAL OF MEDICINE
Major Finding: By the time U.S. children turn 5, 1 in 278 is admitted to the hospital for a norovirus infection, 1 in 14 is seen in an emergency department, and 1 in 6 is seen by an outpatient health care provider, at a cost of $273 million annually.
Data Source: A prospective, population-based surveillance study of norovirus infections in children under age 5.
Disclosures: This study was supported by the CDC. Dr. Payne said that he did not have any conflicts of interest relevant to this study. His coauthors reported ties to GlaxoSmithKline, Merck, and Luminex Molecular Diagnostics.
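As a back-of-envelope check of the figures above, dividing the reported annual cost by the roughly 1 million annual visits cited in the article yields an implied average cost per visit (the division is ours, not the authors'):

```python
# Rough per-visit cost implied by the article's estimates
# (illustrative arithmetic only; both inputs are the article's figures).
annual_visits = 1_000_000       # ~1 million norovirus health care visits/year
annual_cost_usd = 273_000_000   # reported annual cost, $273 million

cost_per_visit = annual_cost_usd / annual_visits
print(f"~${cost_per_visit:.0f} per visit")  # ~$273 per visit
```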
FDA Recommends New Opioids Research Prove Abuse-Deterrent Properties
Inappropriate use of prescription opioids is a major public health challenge, prompting the U.S. Food and Drug Administration (FDA) to issue a draft guidance document aimed at helping industry create new formulations of opioids with abuse-deterrent properties.
Released in January, “Guidance for Industry: Abuse-Deterrent Opioids—Evaluation and Labeling” provides recommendations for conducting studies to prove that a particular formulation contains abuse-deterrent properties. It also explains how the FDA will review the results and determine which labeling claims to approve.
This announcement is “one component of our larger effort to prevent prescription drug abuse and misuse, while ensuring that patients in pain continue to have access to these important medicines,” Douglas Throckmorton, MD, deputy director for regulatory programs in the FDA’s Center for Drug Evaluation and Research, said during a teleconference.
According to the FDA guidance, opioid analgesics can be abused in a variety of ways:
- Swallowed whole;
- Crushed and swallowed;
- Crushed and snorted;
- Crushed and smoked; or
- Crushed, dissolved, and injected.
Because the science of abuse deterrence is relatively new and the analytical, clinical, and statistical methods for evaluating formulation technologies are still evolving, the FDA plans to take a flexible and adaptive approach.
“Physicians should care about this because the government is regulating prescribing practices more directly than in the past, especially with pain drugs,” says Daniel Carpenter, PhD, a Harvard University government professor and author on FDA pharmaceutical regulation. “The FDA and federal agencies are going to be leaning more heavily upon physicians.”
To date, the majority of current abuse-deterrent technologies have not been effective in preventing the most widespread type of abuse—ingesting a number of pills or tablets to reach a state of euphoria.
—Daniel Carpenter, PhD, Harvard University government professor and author on FDA pharmaceutical regulation
Science points toward ways that formulations can help thwart abuse. For instance, adding an opioid antagonist can hinder, limit, or defeat euphoria. An antagonist can be sequestered and released only upon the product’s manipulation. In one such scenario, the substance acting as an antagonist could be clinically inactive when swallowed, but then would become active if the product is crushed and injected or snorted.
“The guidance describes advice for the development of abuse-deterrent opioids and does not describe practice guidelines,” says Christopher Kelly, an FDA spokesman. However, he adds, “[FDA] urges all prescribers of extended-release and long-acting opioids to participate in the training under the Risk Evaluation and Mitigation Strategy (REMS).” The first REMS-compliant training is expected to become available by March 1.
Such a strategy is intended to manage known or potential serious risks associated with a drug product. The FDA requires it to ensure that the benefits of a drug outweigh its risks.
Manufacturers of opioid analgesics have worked with the FDA to produce materials for the REMS program that would inform healthcare professionals about safe prescribing. Continuing-education providers also are designing accredited training. (For more information, listen to this NIH podcast about training to help providers prescribe painkillers properly.)
Prescribers are advised to complete a REMS-compliant program through an accredited continuing-education provider for their discipline. They should discuss the safe use, serious risks, storage, and disposal of opioids with patients and caregivers each time they prescribe these medicines. It’s also essential to stress the importance of reading the medication guide they will receive from the pharmacist at drug-dispensing time.
Whether the FDA’s industry guidance for the development of abuse-deterrent opioids will make a difference remains to be seen, according to Carpenter. The addictive potential of opioids has created “a kind of public health epidemic,” he says. “It’s not an infectious epidemic in the sense of the flu, but it’s socially and behaviorally infectious and very destructive.”
Creating better tamper-resistant drugs could impede someone from “taking a longer-acting version and breaking it down into a much more toxic soup for other purposes,” Carpenter says. However, he concedes, tamper resistance cannot stop someone from simply swallowing too many pills, which remains a very common form of pharmaceutical abuse.
The FDA is accepting public comment on the draft guidance, while encouraging further scientific and clinical research to advance the development and assessment of abuse-deterrent technologies.
Susan Kreimer is a freelance writer based in New York.
According to the FDA guidance, opioid analgesics can be abused in a variety of ways:
- Swallowed whole;
- Crushed and swallowed;
- Crushed and snorted;
- Crushed and smoked; or
- Crushed, dissolved, and injected.
With the science of abuse deterrence being relatively new, the FDA plans to take a flexible and adaptive approach. That’s because the analytical, clinical, and statistical methods for evaluating formulation technologies are still evolving.
“Physicians should care about this because the government is regulating prescribing practices more directly than in the past, especially with pain drugs,” says Daniel Carpenter, PhD, a Harvard University government professor and author on FDA pharmaceutical regulation. “The FDA and federal agencies are going to be leaning more heavily upon physicians.”
To date, the majority of current abuse-deterrent technologies have not been effective in preventing the most widespread type of abuse—ingesting a number of pills or tablets to reach a state of euphoria.
Science points toward ways that formulations can help thwart abuse. For instance, adding an opioid antagonist can hinder, limit, or defeat euphoria. An antagonist can be sequestered and released only upon the product’s manipulation. In one such scenario, the substance acting as an antagonist could be clinically inactive when swallowed, but then would become active if the product is crushed and injected or snorted.
“The guidance describes advice for the development of abuse-deterrent opioids and does not describe practice guidelines,” says Christopher Kelly, an FDA spokesman. However, he adds, “[FDA] urges all prescribers of extended-release and long-acting opioids to participate in the training under the Risk Evaluation and Mitigation Strategy (REMS).” The first REMS-compliant training is expected to become available by March 1.
Such a strategy is intended to manage known or potential serious risks associated with a drug product. The FDA requires it to ensure that the benefits of a drug outweigh its risks.
Manufacturers of opioid analgesics have worked with the FDA to produce materials for the REMS program that would inform healthcare professionals about safe prescribing. Continuing-education providers also are designing accredited training. (For more information, listen to this NIH podcast about training to help providers prescribe painkillers properly.)
Prescribers are advised to complete a REMS-compliant program through an accredited continuing-education provider for their discipline. They should discuss the safe use, serious risks, storage, and disposal of opioids with patients and caregivers each time they prescribe these medicines. It’s also essential to stress the importance of reading the medication guide patients will receive from the pharmacist when the drug is dispensed.
Whether the FDA’s industry guidance for the development of abuse-deterrent opioids will make a difference remains to be seen, according to Carpenter. The addictive potential of opioids has created “a kind of public health epidemic,” he says. “It’s not an infectious epidemic in the sense of the flu, but it’s socially and behaviorally infectious and very destructive.”
Creating better tamper-resistant drugs could impede someone from “taking a longer-acting version and breaking it down into a much more toxic soup for other purposes,” Carpenter says. He concedes, however, that no formulation can stop a person from simply swallowing too many pills, the most common form of pharmaceutical abuse.
The FDA is accepting public comment on the draft guidance, while encouraging further scientific and clinical research to advance the development and assessment of abuse-deterrent technologies.
Susan Kreimer is a freelance writer based in New York.
Old gout drug learns new cardiac tricks
SAN FRANCISCO – The venerable antihyperuricemic agent allopurinol has shown early promise for two novel cardiovascular applications: prevention of atrial fibrillation in the setting of heart failure and reduction of left ventricular hypertrophy in patients with type 2 diabetes.
Allopurinol is a xanthine oxidase inhibitor and antigout drug. The rationale for the drug’s use in reducing the incidence of atrial fibrillation in patients with heart failure lies in the observation that serum uric acid has emerged as an independent marker of mortality and a predictor of new-onset atrial fibrillation in heart failure. Xanthine oxidase is not only a source of reactive oxygen species that adversely affect myocardial function, but it also catalyzes the conversion of xanthine to uric acid, Dr. Fernando E. Hernandez explained at the annual meeting of the American College of Cardiology.
He presented a retrospective cohort study involving 603 patients enrolled in the Miami Veterans Affairs heart failure clinic. The 103 patients taking allopurinol and the 500 who were not were well matched on baseline characteristics, including age, prevalence of coronary artery disease, median left ventricular ejection fraction, left atrial size, and use of guideline-recommended ACE inhibitors and beta-blockers.
During up to 5 years of follow-up, the incidence of new-onset atrial fibrillation was 184 cases/1,000 person-years in the allopurinol users compared with 252/1,000 person-years in controls. In a Cox proportional hazards analysis adjusted for small differences in potential confounders, the use of allopurinol was independently associated with a 47% reduction in the risk of atrial fibrillation (P = .04), reported Dr. Hernandez of the University of Miami.
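As a quick sanity check on the reported rates (an illustration only, not part of the study's analysis), the crude incidence rate ratio can be computed directly from the two person-year rates. Note that the crude reduction differs from the study's 47% figure, which came from a Cox model adjusted for confounders that this back-of-the-envelope arithmetic cannot reproduce.

```python
# Crude comparison of the reported incidence rates of new-onset
# atrial fibrillation (illustrative arithmetic only).

rate_allopurinol = 184 / 1000  # AF cases per person-year, allopurinol users
rate_control = 252 / 1000      # AF cases per person-year, controls

rate_ratio = rate_allopurinol / rate_control
crude_reduction = 1 - rate_ratio

print(f"Crude incidence rate ratio: {rate_ratio:.2f}")
print(f"Crude relative reduction: {crude_reduction:.0%}")
```

The gap between the crude and adjusted estimates is a reminder that unadjusted rate comparisons in observational cohorts can understate (or overstate) the association a multivariable model reports.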
This intriguing finding needs to be confirmed in randomized prospective trials, he noted.
In a separate presentation, Dr. Benjamin R. Szwejkowski noted that left ventricular hypertrophy (LVH) is common in patients with type 2 diabetes and contributes to their elevated risk of cardiovascular morbidity and mortality.
Based on the hypothesis that LVH is driven in part by oxidative stress, and that reducing that stress through xanthine oxidase inhibition with allopurinol could cause LVH regression, the investigators conducted a randomized, double-blind, placebo-controlled clinical trial. Sixty-six patients with type 2 diabetes and echocardiographic evidence of LVH were randomized to allopurinol 600 mg/day or placebo for 9 months.
The primary study endpoint was the change in left ventricular mass between baseline and 9 months, as measured by cardiac MRI. Allopurinol produced a significant mean 2.65-g reduction in LV mass, while LV mass in the control group increased by 1.21 g. Similarly, LV mass indexed to body surface area fell significantly, by 1.32 g/m², in the allopurinol group while increasing by 0.65 g/m² in the placebo arm, reported Dr. Szwejkowski of the University of Dundee (Scotland).
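From the reported group means, the between-group difference in LV mass change can be tallied directly (illustrative arithmetic from the summary figures; individual-level data and variances are not available here, so no significance test is implied).

```python
# Net between-group difference in LV mass change over 9 months,
# computed from the reported means (illustration only).

delta_allopurinol = -2.65  # mean change in LV mass, grams
delta_placebo = +1.21      # mean change in LV mass, grams

between_group_difference = delta_allopurinol - delta_placebo
print(f"Net difference favoring allopurinol: {between_group_difference:.2f} g")

delta_indexed_allopurinol = -1.32  # g/m^2, indexed to body surface area
delta_indexed_placebo = +0.65      # g/m^2
print(f"Indexed net difference: "
      f"{delta_indexed_allopurinol - delta_indexed_placebo:.2f} g/m^2")
```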
"Allopurinol may be a useful therapy to reduce cardiovascular risk in type 2 diabetic patients with LVH," according to the cardiologist.
Flow-mediated dilatation didn’t change significantly over time in either study group.
Dr. Szwejkowski and Dr. Hernandez reported having no relevant financial conflicts.
AT ACC 13
Major finding: During up to 5 years of follow-up, the incidence of new-onset atrial fibrillation was 184 cases/1,000 person-years in allopurinol users compared with 252/1,000 person-years in controls.
Data source: A retrospective cohort study involving 603 patients with heart failure.
Disclosures: The study presenters reported having no relevant financial conflicts.
New concussion guidelines stress individualized approach
Any athlete with a possible concussion should be immediately removed from play pending an evaluation by a licensed health care provider trained in assessing concussions and traumatic brain injury, according to a new guideline from the American Academy of Neurology.
The guideline for evaluating and managing athletes with concussion was published online in the journal Neurology on March 18 (doi:10.1212/WNL.0b013e31828d57dd) in conjunction with the annual meeting of the AAN. The guideline replaces the Academy’s 1997 recommendations, which stressed using a grading system to try to predict concussion outcomes.
The new guideline takes a more individualized and conservative approach, especially for younger athletes. The new approach comes as many states have enacted legislation regulating when young athletes can return to play following a concussion.
"If in doubt, sit it out," Dr. Jeffrey S. Kutcher, coauthor of the guideline and a neurologist at the University of Michigan in Ann Arbor, said in a statement. "Being seen by a trained professional is extremely important after a concussion. If headaches or other symptoms return with the start of exercise, stop the activity and consult a doctor. You only get one brain; treat it well."
The new guideline calls for athletes to stay off the field until they are asymptomatic off medication. High school athletes and younger players with a concussion should be managed more conservatively since they take longer to recover than older athletes, according to the AAN.
But there is not enough evidence to support complete rest after a concussion. Activities that do not worsen symptoms and don’t pose a risk of another concussion can be part of the management of the injury, according to the guideline.
"We’ve moved away from the concussion grading systems we first established in 1997 and are now recommending concussion and return to play be assessed in each athlete individually," Dr. Christopher C. Giza, the co–lead guideline author and a neurologist at Mattel Children’s Hospital at the University of California, Los Angeles, said in a statement. "There is no set timeline for safe return to play."
The AAN expert panel recommends that sideline providers use symptom checklists such as the Standardized Assessment of Concussion to help identify suspected concussion and that the scores be shared with the physicians involved in the athletes’ care off the field. But these checklists should not be the only tool used in making a diagnosis, according to the guidelines. Also, the checklist scores may be more useful if they are compared against preinjury individual scores, especially in younger athletes and those with prior concussions.
CT imaging should not be used to diagnose a suspected sport-related concussion, according to the guideline. But imaging might be used to rule out more serious traumatic brain injuries, such as intracranial hemorrhage in athletes with a suspected concussion who also have a loss of consciousness, posttraumatic amnesia, persistently altered mental status, focal neurologic deficit, evidence of skull fracture, or signs of clinical deterioration.
Athletes are at greater risk of concussion if they have a history of concussion. The first 10 days after a concussion pose the greatest risk for a repeat injury.
The AAN advises physicians to be on the lookout for ongoing symptoms that are linked to a longer recovery, such as continued headache or fogginess. Athletes with a history of concussions and younger players also tend to have a longer recovery.
The guideline also includes level C recommendations stating that health care providers "might" develop individualized graded plans for returning to physical and cognitive activity. They might also offer cognitive restructuring counseling in an effort to shorten the duration of symptoms and reduce the likelihood of chronic post-concussion syndrome, according to the guideline.
The guideline also included a number of recommendations on areas for future research, including studies of pre–high school age athletes to determine the natural history of concussion and recovery time for this age group, as well as the best assessment tools. The expert panel also called for clinical trials of different postconcussion management strategies and return-to-play protocols.
The guidelines were developed by a multidisciplinary expert committee that included representatives from neurology, athletic training, neuropsychology, epidemiology and biostatistics, neurosurgery, physical medicine and rehabilitation, and sports medicine. Many of the authors reported serving as consultants for professional sports associations, receiving honoraria and funding for travel for lectures on sports concussion, receiving research support from various foundations and organizations, and providing expert testimony in legal cases involving traumatic brain injury or concussion.
One of the most important statements in the new guideline is that providers should not rely on a single diagnostic test when evaluating an athlete, said Dr. Barry Jordan, the assistant medical director and attending neurologist at the Burke Rehabilitation Hospital in White Plains, N.Y. Dr. Jordan, who is an expert on sports concussions, said he’s seen too many providers using a single computerized screening tool to assess whether an athlete is well enough to return to play.
The new guideline calls on providers to combine screening checklists with clinical findings when determining whether an athlete is well enough to return to the field. Dr. Jordan said this comprehensive approach is the way to go, and physicians who are knowledgeable about concussions must be involved in that evaluation.
The new guideline is an important update, reflecting the movement away from grading concussions toward a more individualized approach. "You can't grade the severity until the concussion is over," he said.
Dr. Jordan said the AAN guideline is "clear and easy to follow" and will result in better care if followed.
Dr. Barry Jordan is the director of the Brain Injury Program at Burke Rehabilitation Hospital in White Plains, N.Y. He works with several sports organizations, including the New York State Athletic Commission, U.S.A. Boxing, and the National Football League Players Association. He also writes a bimonthly column for Clinical Neurology News called “On the Sidelines.”
Any athlete with a possible concussion should be immediately removed from play pending an evaluation by a licensed health care provider trained in assessing concussions and traumatic brain injury, according to a new guideline from the American Academy of Neurology.
The guideline for evaluating and managing athletes with concussion was published online in the journal Neurology on March 18 (doi:10.1212/WNL.0b013e31828d57dd) in conjunction with the annual meeting of the AAN. The guideline replaces the Academy’s 1997 recommendations, which stressed using a grading system to try to predict concussion outcomes.
The new guideline takes a more individualized and conservative approach, especially for younger athletes. The new approach comes as many states have enacted legislation regulating when young athletes can return to play following a concussion.
"If in doubt, sit it out," Dr. Jeffrey S. Kutcher, coauthor of the guideline and a neurologist at the University of Michigan in Ann Arbor, said in a statement. "Being seen by a trained professional is extremely important after a concussion. If headaches or other symptoms return with the start of exercise, stop the activity and consult a doctor. You only get one brain; treat it well."
The new guideline calls for athletes to stay off the field until they are asymptomatic off medication. High school athletes and younger players with a concussion should be managed more conservatively since they take longer to recover than older athletes, according to the AAN.
But there is not enough evidence to support complete rest after a concussion. Activities that do not worsen symptoms and don’t pose a risk of another concussion can be part of the management of the injury, according to the guideline.
"We’re moved away from the concussion grading systems we first established in 1997 and are now recommending concussion and return to play be assessed in each athlete individually," Dr. Christopher C. Giza, the co–lead guideline author and a neurologist at Mattel Children’s Hospital at the University of California, Los Angeles, said in a statement. "There is no set timeline for safe return to play."
The AAN expert panel recommends that sideline providers use symptom checklists such as the Standardized Assessment of Concussion to help identify suspected concussion and that the scores be shared with the physicians involved in the athletes’ care off the field. But these checklists should not be the only tool used in making a diagnosis, according to the guidelines. Also, the checklist scores may be more useful if they are compared against preinjury individual scores, especially in younger athletes and those with prior concussions.
CT imaging should not be used to diagnose a suspected sport-related concussion, according to the guideline. But imaging might be used to rule out more serious traumatic brain injuries, such as intracranial hemorrhage in athletes with a suspected concussion who also have a loss of consciousness, posttraumatic amnesia, persistently altered mental status, focal neurologic deficit, evidence of skull fracture, or signs of clinical deterioration.
Athletes are at greater risk of concussion if they have a history of concussion. The first 10 days after a concussion pose the greatest risk for a repeat injury.
The AAN advises physicians to be on the lookout for ongoing symptoms that are linked to a longer recovery, such as continued headache or fogginess. Athletes with a history of concussions and younger players also tend to have a longer recovery.
The guideline also include level C recommendations stating that health care providers "might" develop individualized graded plans for returning to physical and cognitive activity. They might also provide cognitive restructuring counseling in an effort to shorten the duration of symptoms and the likelihood of developing chronic post-concussion syndrome, according to the guideline.
The guideline also included a number of recommendations on areas for future research, including studies of pre–high school age athletes to determine the natural history of concussion and recovery time for this age group, as well as the best assessment tools. The expert panel also called for clinical trials of different postconcussion management strategies and return-to-play protocols.
The guidelines were developed by a multidisciplinary expert committee that included representatives from neurology, athletic training, neuropsychology, epidemiology and biostatistics, neurosurgery, physical medicine and rehabilitation, and sports medicine. Many of the authors reported serving as consultants for professional sports associations, receiving honoraria and funding for travel for lectures on sports concussion, receiving research support from various foundations and organizations, and providing expert testimony in legal cases involving traumatic brain injury or concussion.
Any athlete with a possible concussion should be immediately removed from play pending an evaluation by a licensed health care provider trained in assessing concussions and traumatic brain injury, according to a new guideline from the American Academy of Neurology.
The guideline for evaluating and managing athletes with concussion was published online in the journal Neurology on March 18 (doi:10.1212/WNL.0b013e31828d57dd) in conjunction with the annual meeting of the AAN. The guideline replaces the Academy’s 1997 recommendations, which stressed using a grading system to try to predict concussion outcomes.
The new guideline takes a more individualized and conservative approach, especially for younger athletes. The new approach comes as many states have enacted legislation regulating when young athletes can return to play following a concussion.
"If in doubt, sit it out," Dr. Jeffrey S. Kutcher, coauthor of the guideline and a neurologist at the University of Michigan in Ann Arbor, said in a statement. "Being seen by a trained professional is extremely important after a concussion. If headaches or other symptoms return with the start of exercise, stop the activity and consult a doctor. You only get one brain; treat it well."
The new guideline calls for athletes to stay off the field until they are asymptomatic off medication. High school athletes and younger players with a concussion should be managed more conservatively since they take longer to recover than older athletes, according to the AAN.
But there is not enough evidence to support complete rest after a concussion. Activities that do not worsen symptoms and don’t pose a risk of another concussion can be part of the management of the injury, according to the guideline.
"We’ve moved away from the concussion grading systems we first established in 1997 and are now recommending concussion and return to play be assessed in each athlete individually," Dr. Christopher C. Giza, the co–lead guideline author and a neurologist at Mattel Children’s Hospital at the University of California, Los Angeles, said in a statement. "There is no set timeline for safe return to play."
The AAN expert panel recommends that sideline providers use symptom checklists such as the Standardized Assessment of Concussion to help identify suspected concussion and that the scores be shared with the physicians involved in the athletes’ care off the field. But these checklists should not be the only tool used in making a diagnosis, according to the guidelines. Also, the checklist scores may be more useful if they are compared against preinjury individual scores, especially in younger athletes and those with prior concussions.
CT imaging should not be used to diagnose a suspected sport-related concussion, according to the guideline. But imaging might be used to rule out more serious traumatic brain injuries, such as intracranial hemorrhage in athletes with a suspected concussion who also have a loss of consciousness, posttraumatic amnesia, persistently altered mental status, focal neurologic deficit, evidence of skull fracture, or signs of clinical deterioration.
Athletes are at greater risk of concussion if they have a history of concussion. The first 10 days after a concussion pose the greatest risk for a repeat injury.
The AAN advises physicians to be on the lookout for ongoing symptoms that are linked to a longer recovery, such as continued headache or fogginess. Athletes with a history of concussions and younger players also tend to have a longer recovery.
The guideline also includes level C recommendations stating that health care providers "might" develop individualized graded plans for returning to physical and cognitive activity. They might also provide cognitive restructuring counseling in an effort to shorten the duration of symptoms and reduce the likelihood of developing chronic postconcussion syndrome, according to the guideline.
The guideline also included a number of recommendations on areas for future research, including studies of pre–high school age athletes to determine the natural history of concussion and recovery time for this age group, as well as the best assessment tools. The expert panel also called for clinical trials of different postconcussion management strategies and return-to-play protocols.
The guidelines were developed by a multidisciplinary expert committee that included representatives from neurology, athletic training, neuropsychology, epidemiology and biostatistics, neurosurgery, physical medicine and rehabilitation, and sports medicine. Many of the authors reported serving as consultants for professional sports associations, receiving honoraria and funding for travel for lectures on sports concussion, receiving research support from various foundations and organizations, and providing expert testimony in legal cases involving traumatic brain injury or concussion.
FROM NEUROLOGY
Sleep in Hospitalized Adults
Lack of sleep is a common problem in hospitalized patients and is associated with poorer health outcomes, especially in older patients.[1, 2, 3] Prior studies highlight a multitude of factors that can result in sleep loss in the hospital,[3, 4, 5, 6] with noise being one of the most common causes of sleep disruption.[7, 8, 9]
In addition to external factors, such as hospital noise, there may be inherent characteristics that predispose certain patients to greater sleep loss when hospitalized. One such measure is the construct of perceived control or the psychological measure of how much individuals expect themselves to be capable of bringing about desired outcomes.[10] Among older patients, low perceived control is associated with increased rates of physician visits, hospitalizations, and death.[11, 12] In contrast, patients who feel more in control of their environment may experience positive health benefits.[13]
Yet, when patients are placed in a hospital setting, they experience a significant reduction in control over their environment along with an increase in dependency on medical staff and therapies.[14, 15] For example, hospitalized patients are restricted in their personal decisions, such as what clothes they can wear and what they can eat, and they are not in charge of their own schedules, including their sleep time.
Although prior studies suggest that perceived control over sleep is related to actual sleep among community‐dwelling adults,[16, 17] no study has examined this relationship in hospitalized adults. Therefore, the aim of our study was to examine the possible association between perceived control, noise levels, and sleep in hospitalized middle‐aged and older patients.
METHODS
Study Design
We conducted a prospective cohort study of subjects recruited from a large ongoing study of admitted patients at the University of Chicago inpatient general medicine service.[18] Because we were interested in middle‐aged and older adults who are most sensitive to sleep disruptions, patients who were age 50 years and over, ambulatory, and living in the community were eligible for the study.[19] Exclusion criteria were cognitive impairment (telephone version of the Mini‐Mental State Exam <17 out of 22), preexisting sleeping disorders identified via patient charts, such as obstructive sleep apnea and narcolepsy, transfer from the intensive care unit (ICU), and admission to the hospital more than 72 hours prior to enrollment.[20] These inclusion and exclusion criteria were selected to identify a patient population with minimal sleep disturbances at baseline. Patients under isolation were excluded because they are not visited as frequently by the healthcare team.[21, 22] Most general medicine rooms were double occupancy but efforts were made to make patient rooms single when possible or required (ie, isolation for infection control). The study was approved by the University of Chicago Institutional Review Board.
Subjective Data Collection
Baseline levels of perceived control over sleep, or the amount of control patients believe they have over their sleep, were assessed using 2 different scales. The first tool was the 8‐item Sleep Locus of Control (SLOC) scale,[17] which ranges from 8 to 48, with higher values corresponding to a greater internal locus of control over sleep. An internal sleep locus of control indicates beliefs that patients are primarily responsible for their own sleep, as opposed to an external locus of control, which indicates beliefs that good sleep is due to luck or chance. For example, patients were asked how strongly they agree or disagree with statements such as, "If I take care of myself, I can avoid insomnia" and "People who never get insomnia are just plain lucky" (see Supporting Information, Appendix 2, in the online version of this article). The second tool was the 9‐item Sleep Self‐Efficacy (SSE) scale,[23] which ranges from 9 to 45, with higher values corresponding to greater confidence patients have in their ability to sleep. One of the items asks, "How confident are you that you can lie in bed feeling physically relaxed?" (see Supporting Information, Appendix 1, in the online version of this article). Both instruments have been validated in an outpatient setting.[23] These surveys were given immediately on enrollment in the study to measure baseline perceived control.
Baseline sleep habits were also collected on enrollment using the Epworth Sleepiness Scale,[24, 25] a standard validated survey that assesses excess daytime sleepiness in various common situations. For each day in the hospital, patients were asked to report in‐hospital sleep quality using the Karolinska Sleep Log.[26] The Karolinska Sleep Quality Index (KSQI) is calculated from 4 items on the Karolinska Sleep Log (sleep quality, sleep restlessness, slept throughout the night, ease of falling asleep). The questions are on a 5‐point scale, and the 4 items are averaged for a final score out of 5, with a higher number indicating better subjective sleep quality. The item "How much was your sleep disturbed by noise?" on the Karolinska Sleep Log was used to assess the degree to which noise was a disruptor of sleep. This question was also on a 5‐point scale, with higher scores indicating greater disruptiveness of noise. Patients were also asked how disruptive noise from roommates was on a nightly basis using this same scale.
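As a minimal sketch of the KSQI calculation described above (argument names are illustrative, not taken from the Karolinska instrument; following the text, the four 5-point items are simply averaged):

```python
def ksqi(sleep_quality, restlessness, slept_through, ease_falling_asleep):
    """Karolinska Sleep Quality Index: mean of four 5-point items.

    Each argument is a rating from 1 to 5; per the text, a higher
    final score indicates better subjective sleep quality.
    """
    items = [sleep_quality, restlessness, slept_through, ease_falling_asleep]
    if not all(1 <= x <= 5 for x in items):
        raise ValueError("each item must be rated 1-5")
    return sum(items) / len(items)
```

For example, ratings of 4, 3, 4, and 3 yield a KSQI of 3.5, the cohort median reported in the Results.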
Objective Data Collection
Wrist activity monitors (Actiwatch 2; Respironics, Inc., Murrysville, PA)[27, 28, 29, 30] were used to measure patient sleep. Actiware 5 software (Respironics, Inc.)[31] was used to estimate quantitative measures of sleep time and efficiency. Sleep time is defined as the total duration of time spent sleeping at night and sleep efficiency is defined as the fraction of time, reported as a percentage, spent sleeping by actigraphy out of the total time patients reported they were sleeping.
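The sleep-efficiency definition above is a simple ratio; a sketch (function and argument names are illustrative):

```python
def sleep_efficiency(actigraphy_sleep_min, reported_sleep_min):
    """Sleep efficiency as defined in the text: actigraphy-measured
    sleep time as a percentage of the total time the patient
    reported sleeping. Names are illustrative, not from Actiware.
    """
    if reported_sleep_min <= 0:
        raise ValueError("reported sleep period must be positive")
    return 100.0 * actigraphy_sleep_min / reported_sleep_min
```

For instance, 400 minutes of actigraphy-estimated sleep within a 500-minute reported sleep period gives an efficiency of 80%, the adult normal-range threshold cited in the Results.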
Sound levels in patient rooms were recorded using Larson Davis 720 Sound Level Monitors (Larson Davis, Inc., Provo, UT). These monitors store functional average sound pressure levels in A‐weighted decibels called the Leq over 1‐hour intervals. The Leq is the average sound level over the given time interval. Minimum (Lmin) and maximum (Lmax) sound levels are also stored. The LD SLM Utility Program (Larson Davis, Inc.) was used to extract the sound level measurements recorded by the monitors.
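Because decibels are logarithmic, the Leq is an energy-equivalent average rather than an arithmetic mean of dB readings. A sketch using the standard acoustics formula (an assumption from general acoustics practice, not taken from the Larson Davis documentation):

```python
import math

def leq(levels_db):
    """Equivalent continuous sound level (Leq) over an interval.

    Averages the underlying sound energy (10**(L/10)) across the
    readings and converts back to decibels, so louder moments
    dominate the average, as they do perceptually.
    """
    if not levels_db:
        raise ValueError("need at least one level")
    mean_energy = sum(10 ** (l / 10.0) for l in levels_db) / len(levels_db)
    return 10.0 * math.log10(mean_energy)
```

Note that a quiet hour punctuated by loud events yields a much higher Leq than the arithmetic mean of the dB values would suggest: equal time at 50 and 70 dB averages to about 67 dB, not 60.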
Demographic information (age, gender, race, ethnicity, highest level of education, length of stay in the hospital, and comorbidities) was obtained from hospital charts via an ongoing study of admitted patients at the University of Chicago Medical Center inpatient general medicine service.[18] Chart audits were performed to determine whether patients received pharmacologic sleep aids in the hospital.
Data Analysis
Descriptive statistics were used to summarize mean sleep duration and sleep efficiency in the hospital as well as SLOC and SSE. Because the SSE scores were not normally distributed, the scores were dichotomized at the median to create a variable denoting high and low SSE. Additionally, because the distribution of responses to the noise disruption question was skewed to the right, reports of noise disruptions were grouped into not disruptive (score=1) and disruptive (score>1).
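The dichotomization steps above can be sketched as follows (function names are illustrative):

```python
from statistics import median

def dichotomize_at_median(scores):
    """Split scores into high (above the sample median) and low
    (at or below it), as done for the skewed SSE scores."""
    cut = median(scores)
    return [s > cut for s in scores]

def noise_disruptive(score):
    """Noise-disruption item: a score of 1 is 'not disruptive';
    any score above 1 is coded 'disruptive'."""
    return score > 1
```

With this coding, a patient scoring exactly at the SSE median falls into the low-SSE group, consistent with the Results' definition of high SSE as scoring above the median.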
Two‐sample t tests with equal variances were used to assess the relationship between perceived control measures (high/low SLOC, SSE) and objective sleep measures (sleep time, sleep efficiency). Multivariate linear regression was used to test the association between high SSE (independent variable) and sleep time (dependent variable), clustering for multiple nights of data within the subject. Multivariate logistic regression, also adjusting for subject, was used to test the association between high SSE and noise disruptiveness and the association between high SSE and Karolinska scores. Leq, Lmax, and Lmin were all tested using stepwise forward regression. Because our prior work[9] demonstrated that noise levels separated into tertiles were significantly associated with sleep time, our analysis also used noise levels separated into tertiles. Stepwise forward regression was used to add basic patient demographics (gender, race, age) to the models. Statistical significance was defined as P<0.05, and all statistical analysis was done using Stata 11.0 (StataCorp, College Station, TX).
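"Clustering for multiple nights of data within the subject" means the variance estimate must allow correlation among a subject's nights. A minimal numpy sketch of OLS with a cluster-robust (sandwich) variance estimator, illustrative only and not the authors' Stata analysis:

```python
import numpy as np

def ols_cluster(y, x, groups):
    """OLS of y on x (with intercept), with cluster-robust
    standard errors: score contributions are summed within each
    cluster (subject) before forming the sandwich estimator, so
    nights from the same subject may be arbitrarily correlated.
    """
    X = np.column_stack([np.ones(len(y)), x])   # add intercept column
    beta = np.linalg.solve(X.T @ X, X.T @ y)    # OLS point estimates
    resid = y - X @ beta
    XtX_inv = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(groups):
        Xg = X[groups == g]                     # this cluster's rows
        s = Xg.T @ resid[groups == g]           # cluster score vector
        meat += np.outer(s, s)
    cov = XtX_inv @ meat @ XtX_inv              # sandwich covariance
    return beta, np.sqrt(np.diag(cov))
```

In practice the same estimates are available from statsmodels by fitting OLS with `cov_type='cluster'`; the loop above just makes the within-subject summation explicit.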
RESULTS
From April 2010 to May 2012, 1134 patients were screened by study personnel for this study via an ongoing study of hospitalized patients on the inpatient general medicine ward. Of the 361 (31.8%) eligible patients, 206 (57.1%) consented to participate. Of the subjects enrolled in the study, 118 were able to complete at least 1 night of actigraphy, sound monitoring, and subjective assessment for a total of 185 patient nights (Figure 1).
The majority of patients were female (57%), African American (67%), and non‐Hispanic (97%). The mean age was 65 years (standard deviation [SD], 11.6 years), and the median length of stay was 4 days (interquartile range [IQR], 3–6 days). The majority of patients also had hypertension (67%), with chronic obstructive pulmonary disease (COPD; 31%) and congestive heart failure (31%) being the next most common comorbidities. About two‐thirds of subjects (64%) were characterized as average or above average sleepers with Epworth Sleepiness Scale scores ≤9[20] (Table 1). Only 5% of patients received pharmacological sleep aids.
| Characteristic | Value, n (%)a |
|---|---|
| Patient characteristics | |
| Age, mean (SD), y | 63 (12) |
| Length of stay, median (IQR), db | 4 (3–6) |
| Female | 67 (57) |
| African American | 79 (67) |
| Hispanic | 3 (3) |
| High school graduate | 92 (78) |
| Comorbidities | |
| Hypertension | 79 (66) |
| Chronic obstructive pulmonary disease | 37 (31) |
| Congestive heart failure | 37 (31) |
| Diabetes | 36 (30) |
| End stage renal disease | 23 (19) |
| Baseline sleep characteristics | |
| Sleep duration, mean (SD), minc | 333 (128) |
| Epworth Sleepiness Scale, score ≤9d | 73 (64) |
The mean baseline SLOC score was 30.4 (SD, 6.7), with a median of 31 (IQR, 27–35). The mean baseline SSE score was 32.1 (SD, 9.4), with a median of 34 (IQR, 24–41). Fifty‐four patients were categorized as having high sleep self‐efficacy (high SSE), which we defined as scoring above the median of 34.
Average in‐hospital sleep was 5.5 hours (333 minutes; SD, 128 minutes), which was significantly shorter than the self‐reported sleep duration of 6.5 hours prior to admission (387 minutes; SD, 125 minutes; P=0.0001). The mean sleep efficiency was 73% (SD, 19%), with 55% of actigraphy nights below the normal range of 80% efficiency for adults.[19] Median KSQI was 3.5 (IQR, 2.25–4.75), with 41% of the patients having a KSQI ≤3, putting them in the insomniac range.[32] The median score on the noise disruptiveness question was 1 (IQR, 1–4), with 42% of reports coded as disruptive, defined as a score >1 on the 5‐point scale. The median score on the roommate disruptiveness question was 1 (IQR, 1–1), with 77% of responses coded as not disruptive, defined as a score of 1 on the 5‐point scale.
A 2‐sample t test with equal variances showed that patients reporting high SSE were more likely to sleep longer in the hospital than those reporting low SSE (364 minutes [95% CI: 340, 388] vs 309 minutes [95% CI: 283, 336]; P=0.003) (Figure 2). Patients with high SSE were also more likely to have a normal sleep efficiency (above 80%) compared to those with low SSE (54% [95% CI: 43, 65] vs 38% [95% CI: 28, 47]; P=0.028). Last, there was a trend toward patients with high SSE reporting less noise disruption than those with low SSE (42% [95% CI: 31, 53] vs 56% [95% CI: 46, 65]; P=0.063) (Figure 3).
Linear regression clustered by subject showed that high SSE was associated with longer sleep duration (55 minutes [95% CI: 14, 97]; P=0.010). Furthermore, high SSE remained significantly associated with longer sleep duration after controlling for both objective noise level and patient demographics using stepwise forward regression (50 minutes [95% CI: 11, 90]; P=0.014) (Table 2).
| Sleep Duration (min) | Model 1 Beta [95% CI]a | Model 2 Beta [95% CI]a |
|---|---|---|
| High SSE | 55 [14, 97]b | 50 [11, 90]b |
| Lmin tertile 3 | | -14 [-59, 29] |
| Lmin tertile 2 | | -21 [-65, 23] |
| Female | | 49 [10, 89]b |
| African American | | -16 [-59, 27] |
| Age | | 1 [-0.9, 3] |

| Karolinska Sleep Quality | Model 1 OR [95% CI]c | Model 2 OR [95% CI]c |
|---|---|---|
| High SSE | 2.04 [1.12, 3.71]b | 2.01 [1.06, 3.79]b |
| Lmin tertile 3 | | 0.90 [0.37, 2.2] |
| Lmin tertile 2 | | 0.86 [0.38, 1.94] |
| Female | | 1.78 [0.90, 3.52] |
| African American | | 1.19 [0.60, 2.38] |
| Age | | 1.02 [0.99, 1.05] |

| Noise Complaints | Model 1 OR [95% CI]d | Model 2 OR [95% CI]d |
|---|---|---|
| High SSE | 0.57 [0.30, 1.12] | 0.49 [0.25, 0.96]b |
| Lmin tertile 3 | | 0.85 [0.39, 1.84] |
| Lmin tertile 2 | | 0.91 [0.43, 1.93] |
| Female | | 1.40 [0.71, 2.78] |
| African American | | 0.35 [0.17, 0.70] |
| Age | | 1.00 [0.96, 1.03] |
| Age2e | | 1.00 [1.00, 1.00] |
Logistic regression clustered by subject demonstrated that patients with high SSE had twice the odds of having a KSQI score above 3 (OR: 2.04; 95% CI: 1.12, 3.71; P=0.020). This association remained significant after controlling for noise and patient demographics (OR: 2.01; 95% CI: 1.06, 3.79; P=0.032). After controlling for noise levels and patient demographics, there was also a statistically significant association between high SSE and lower odds of noise complaints (OR: 0.49; 95% CI: 0.25, 0.96; P=0.039) (Table 2). Although demographic characteristics were not associated with high SSE, patients with high SSE had lower odds of being in the loudest tertile of rooms (OR: 0.34; 95% CI: 0.15, 0.74; P=0.007).
In multivariate linear regression analyses, there were no significant relationships between SLOC scores and KSQI, reported noise disruptiveness, and markers of sleep (sleep duration or sleep efficiency).
DISCUSSION
This study is the first to examine the relationship between perceived control, noise levels, and objective measurements of sleep in a hospital setting. One measure of perceived control, namely SSE, was associated with objective sleep duration, subjective and objective sleep quality, noise levels in patient rooms, and perhaps also patient complaints of noise. These associations remained significant after controlling for objective noise levels and patient demographics, suggesting that SSE is independently related to sleep.
In contrast to SSE, SLOC was not found to be significantly associated with either subjective or objective measures of sleep quality. The lack of association may be due to the fact that the SLOC questionnaire does not translate as well to the inpatient setting as the SSE questionnaire. The SLOC questionnaire focuses on general beliefs about sleep, whereas the SSE questionnaire focuses on personal beliefs about one's own ability to sleep in the immediate future, which may make it more relevant in the inpatient setting (see Supporting Information, Appendix 1 and 2, in the online version of this article).
Given our findings, it is important to identify why patients with high SSE have better sleep and fewer noise complaints. One possibility is that sleep self‐efficacy is an inherent trait unique to each person that is also predictive of a patient's sleep patterns. However, it is also possible that patients with high SSE feel more empowered to take control of their environment, allowing them to advocate for better sleep. This hypothesis is further strengthened by the finding that patients with high SSE on study entry were less likely to be in the noisiest rooms. This raises the possibility that at least 1 of the mechanisms by which high SSE may be protective against sleep loss is through patients taking an active role in noise reduction, such as closing the door or advocating for their sleep with staff. However, we did not directly observe or ask patients whether doors of patient rooms were open or closed or whether the patients took other measures to advocate for their own sleep. Thus, further work is necessary to understand the mechanisms by which sleep self‐efficacy may influence sleep.
One potential avenue for future research is to explore possible interventions for boosting sleep self‐efficacy in the hospital. Although most interventions have focused on environmental noise and staff‐based education, empowering patients through boosting SSE may be a helpful adjunct to improving hospital sleep.[33, 34] Currently, the SSE scale is not commonly used in the inpatient setting. Motivational interviewing and patient coaching could be explored as potential tools for boosting SSE. Furthermore, even if SSE is not easily changed, measuring SSE in patients newly admitted to the hospital may be useful in identifying patients most susceptible to sleep disruptions. Efforts to identify patients with low SSE should go hand‐in‐hand with measures to reduce noise. Addressing both patient‐level and environmental factors simultaneously may be the best strategy for improving sleep in an inpatient hospital setting.
It is worth noting that, in contrast to our prior study, we did not find any significant relationships between overall noise levels and sleep.[9] In this dataset, nighttime noise remains a predictor of sleep loss in the hospital; however, when we restrict our sample to those who answered the SSE questionnaire and had nighttime noise recorded, we lose a significant number of observations. Because of our interest in testing the relationship between SSE and sleep, we chose to control for overall noise, which enabled us to retain more observations. We also did not find any interactions between SSE and noise in our regression models. Further work with larger sample sizes is warranted to better understand the role of SSE in the context of sleep and noise levels. In addition, female patients obtained more sleep than male patients in our study.
There are several limitations to this study. This study was carried out at a single service at a single institution, limiting the ability to generalize the findings to other hospital settings. This study had a relatively high rate of patients who were unable to complete at least 1 night of data collection (42%), often due to watch removal for imaging or procedures, which may also affect the representativeness of our sample. Moreover, we can only examine associations and not causal relationships. The SSE scale has never been used in hospitalized patients, making comparisons between scores from hospitalized patients and population controls difficult. In addition, the SSE scale also has not been dichotomized in previous studies into high and low SSE. However, a sensitivity analysis with raw SSE scores did not change the results of our study. It can be difficult to perform actigraphy measurements in the hospital because many patients spend most of their time in bed. Because we chose a relatively healthy cohort of patients without significant limitations in mobility, actigraphy could still be used to differentiate time spent awake from time spent sleeping. Because we did not perform polysomnography, we cannot explore the role of sleep architecture which is an important component of sleep quality. Although the use of pharmacologic sleep aids is a potential confounding factor, the rate of use was very low in our cohort and unlikely to significantly affect our results. Continued study of this patient population is warranted to further develop the findings.
In conclusion, patients with high SSE sleep better in the hospital, tend to be in quieter rooms, and may report fewer noise complaints. Our findings suggest that a greater confidence in the ability to sleep may be beneficial in hospitalized adults. In addition to noise control, hospitals should also consider targeting patients with low SSE when designing novel interventions to improve in‐hospital sleep.
Disclosures
This work was supported by funding from the National Institute on Aging through a Short‐Term Aging‐Related Research Program (1 T35 AG029795), National Institute on Aging career development award (K23AG033763), a midcareer career development award (1K24AG031326), a program project (P01AG‐11412), an Agency for Healthcare Research and Quality Centers for Education and Research on Therapeutics grant (1U18HS016967), and a National Institute on Aging Clinical Translational Sciences award (UL1 RR024999). Dr. Arora had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the statistical analysis. The funding agencies had no role in the design of the study; the collection, analysis, and interpretation of the data; or the decision to approve publication of the finished manuscript. The authors report no conflicts of interest.
1. The metabolic consequences of sleep deprivation. Sleep Med Rev. 2007;11(3):163–178.
2. Poor self‐reported sleep quality predicts mortality within one year of inpatient post‐acute rehabilitation among older adults. Sleep. 2011;34(12):1715–1721.
3. The sleep of older people in hospital and nursing homes. J Clin Nurs. 1999;8:360–368.
4. Sleep in hospitalized medical patients, part 1: factors affecting sleep. J Hosp Med. 2008;3:473–482.
5. Nocturnal care interactions with patients in critical care units. Am J Crit Care. 2004;13:102–112; quiz 114–115.
6. Patient perception of sleep quality and etiology of sleep disruption in the intensive care unit. Am J Respir Crit Care Med. 1999;159:1155–1162.
7. Sleep in acute care settings: an integrative review. J Nurs Scholarsh. 2000;32(1):31–38.
8. Sleep disruption due to hospital noises: a prospective evaluation. Ann Intern Med. 2012;157(3):170–179.
9. Noise and sleep among adult medical inpatients: far from a quiet night. Arch Intern Med. 2012;172:68–70.
10. Generalized expectancies for internal versus external control of reinforcement. Psychol Monogr. 1966;80:1–28.
11. Psychosocial risk factors and mortality: a prospective study with special focus on social support, social participation, and locus of control in Norway. J Epidemiol Community Health. 1998;52:476–481.
12. The interactive effect of perceived control and functional status on health and mortality among young‐old and old‐old adults. J Gerontol B Psychol Sci Soc Sci. 1997;52:P118–P126.
13. Role‐specific feelings of control and mortality. Psychol Aging. 2000;15:617–626.
14. Patient empowerment in intensive care—an interview study. Intensive Crit Care Nurs. 2006;22:370–377.
15. Exploring the relationship between personal control and the hospital environment. J Clin Nurs. 2008;17:1601–1609.
16. Effects of volitional lifestyle on sleep‐life habits in the aged. Psychiatry Clin Neurosci. 1998;52:183–184.
17. Sleep locus of control: report on a new scale. Behav Sleep Med. 2004;2:79–93.
18. Effects of physician experience on costs and outcomes on an academic general medicine service: results of a trial of hospitalists. Ann Intern Med. 2002;137:866–874.
19. The effects of age, sex, ethnicity, and sleep‐disordered breathing on sleep architecture. Arch Intern Med. 2004;164:406–418.
20. Validation of a telephone version of the mini‐mental state examination. J Am Geriatr Soc. 1992;40:697–702.
21. Contact isolation in surgical patients: a barrier to care? Surgery. 2003;134:180–188.
22. Adverse effects of contact isolation. Lancet. 1999;354:1177–1178.
23. Behavioral Treatment for Persistent Insomnia. Elmsford, NY: Pergamon Press; 1987.
24. A new method for measuring daytime sleepiness: the Epworth sleepiness scale. Sleep. 1991;14:540–545.
25. Reliability and factor analysis of the Epworth Sleepiness Scale. Sleep. 1992;15:376–381.
26. Objective components of individual differences in subjective sleep quality. J Sleep Res. 1997;6:217–220.
27. The role of actigraphy in the study of sleep and circadian rhythms. Sleep. 2003;26:342–392.
28. Practice parameters for the use of actigraphy in the assessment of sleep and sleep disorders: an update for 2007. Sleep. 2007;30:519–529.
29. The role of actigraphy in the evaluation of sleep disorders. Sleep. 1995;18:288–302.
30. Clinical review: sleep measurement in critical care patients: research and clinical implications. Crit Care. 2007;11:226.
31. Evaluation of immobility time for sleep latency in actigraphy. Sleep Med. 2009;10:621–625.
32. The subjective meaning of sleep quality: a comparison of individuals with and without insomnia. Sleep. 2008;31:383–393.
33. Sleep in hospitalized medical patients, part 2: behavioral and pharmacological management of sleep disturbances. J Hosp Med. 2009;4:50–59.
34. A nonpharmacologic sleep protocol for hospitalized older patients. J Am Geriatr Soc. 1998;46(6):700–705.
Demographic information (age, gender, race, ethnicity, highest level of education, length of stay in the hospital, and comorbidities) was obtained from hospital charts via an ongoing study of admitted patients at the University of Chicago Medical Center inpatient general medicine service.[18] Chart audits were performed to determine whether patients received pharmacologic sleep aids in the hospital.
Data Analysis
Descriptive statistics were used to summarize mean sleep duration and sleep efficiency in the hospital as well as SLOC and SSE. Because the SSE scores were not normally distributed, the scores were dichotomized at the median to create a variable denoting high and low SSE. Additionally, because the distribution of responses to the noise disruption question was skewed to the right, reports of noise disruptions were grouped into not disruptive (score=1) and disruptive (score>1).
Two‐sample t tests with equal variances were used to assess the relationship between perceived control measures (high/low SLOC, SSE) and objective sleep measures (sleep time, sleep efficiency). Multivariate linear regression was used to test the association between high SSE (independent variable) and sleep time (dependent variable), clustering for multiple nights of data within the subject. Multivariate logistic regression, also adjusting for subject, was used to test the association between high SSE and noise disruptiveness and the association between high SSE and Karolinska scores. Leq, Lmax, and Lmin were all tested using stepwise forward regression. Because our prior work[9] demonstrated that noise levels separated into tertiles were significantly associated with sleep time, our analysis also used noise levels separated into tertiles. Stepwise forward regression was used to add basic patient demographics (gender, race, age) to the models. Statistical significance was defined as P<0.05, and all statistical analysis was done using Stata 11.0 (StataCorp, College Station, TX).
RESULTS
From April 2010 to May 2012, 1134 patients were screened by study personnel for this study via an ongoing study of hospitalized patients on the inpatient general medicine ward. Of the 361 (31.8%) eligible patients, 206 (57.1%) consented to participate. Of the subjects enrolled in the study, 118 were able to complete at least 1 night of actigraphy, sound monitoring, and subjective assessment for a total of 185 patient nights (Figure 1).
The majority of patients were female (57%), African American (67%), and non‐Hispanic (97%). The mean age was 65 years (standard deviation [SD], 11.6 years), and the median length of stay was 4 days (interquartile range [IQR], 36). The majority of patients also had hypertension (67%), with chronic obstructive pulmonary disease [COPD] (31%) and congestive heart failure (31%) being the next most common comorbidities. About two‐thirds of subjects (64%) were characterized as average or above average sleepers with Epworth Sleepiness Scale scores 9[20] (Table 1). Only 5% of patients received pharmacological sleep aids.
| Value, n (%)a | |
|---|---|
| |
| Patient characteristics | |
| Age, mean (SD), y | 63 (12) |
| Length of stay, median (IQR), db | 4 (36) |
| Female | 67 (57) |
| African American | 79 (67) |
| Hispanic | 3 (3) |
| High school graduate | 92 (78) |
| Comorbidities | |
| Hypertension | 79 (66) |
| Chronic obstructive pulmonary disease | 37 (31) |
| Congestive heart failure | 37 (31) |
| Diabetes | 36 (30) |
| End stage renal disease | 23 (19) |
| Baseline sleep characteristics | |
| Sleep duration, mean (SD), minc | 333 (128) |
| Epworth Sleepiness Scale, score 9d | 73 (64) |
The mean baseline SLOC score was 30.4 (SD, 6.7), with a median of 31 (IQR, 2735). The mean baseline SSE score was 32.1 (SD, 9.4), with a median of 34 (IQR, 2441). Fifty‐four patients were categorized as having high sleep self‐efficacy (high SSE), which we defined as scoring above the median of 34.
Average in‐hospital sleep was 5.5 hours (333 minutes; SD, 128 minutes) which was significantly shorter than the self‐reported sleep duration of 6.5 hours prior to admission (387 minutes, SD, 125 minutes; P=0.0001). The mean sleep efficiency was 73% (SD, 19%) with 55% of actigraphy nights below the normal range of 80% efficiency for adults.[19] Median KSQI was 3.5 (IQR, 2.254.75), with 41% of the patients with a KSQI 3, putting them in the insomniac range.[32] The median score on the noise disruptiveness question was 1 (IQR, 14) with 42% of reports coded as disruptive defined as a score >1 on the 5‐point scale. The median score on the roommate disruptiveness question was 1 (IQR, 11) with 77% of responses coded as not disruptive defined as a score of 1 on the 5‐point scale.
A 2‐sample t test with equal variances showed that those patients reporting high SSE were more likely to sleep longer in the hospital than those reporting low SSE (364 minutes 95% confidence interval [CI]: 340, 388 vs 309 minutes 95% CI: 283, 336; P=0.003) (Figure 2). Patients with high SSE were also more likely to have a normal sleep efficiency (above 80%) compared to those with low SSE (54% 95% CI: 43, 65 vs 38% 95% CI: 28,47; P=0.028). Last, there was a trend toward patients reporting higher SSE to also report less noise disruption compared to those patients with low SSE ([42%] 95% CI: 31, 53 vs [56%] 95% CI: 46, 65; P=0.063) (Figure 3).
Linear regression clustered by subject showed that high SSE was associated with longer sleep duration (55 minutes 95% CI: 14, 97; P=0.010). Furthermore, high SSE was significantly associated with longer sleep duration after controlling for both objective noise level and patient demographics in the model using stepwise forward regression (50 minutes 95% CI: 11, 90; P=0.014) (Table 2).
| Sleep Duration (min) | Model 1 Beta [95% CI]a | Model 2 Beta [95% CI]a |
|---|---|---|
| ||
| High SSE | 55 [14, 97]b | 50 [11, 90]b |
| Lmin tert 3 | 14 [59, 29] | |
| Lmin tert 2 | 21 [65, 23] | |
| Female | 49 [10, 89]b | |
| African American | 16 [59, 27] | |
| Age | 1 [0.9, 3] | |
| Karolinska Sleep Quality | Model 1 OR [95% CI]c | Model 2 OR [95% CI]c |
| High SSE | 2.04 [1.12, 3.71]b | 2.01 [1.06, 3.79]b |
| Lmin tert 3 | 0.90 [0.37, 2.2] | |
| Lmin tert 2 | 0.86 [0.38, 1.94] | |
| Female | 1.78 [0.90, 3.52] | |
| African American | 1.19 [0.60, 2.38] | |
| Age | 1.02 [0.99, 1.05] | |
| Noise Complaints | Model 1 OR [95% CI]d | Model 2 OR [95% CI]d |
| High SSE | 0.57 [0.30, 1.12] | 0.49 [0.25, 0.96]b |
| Lmin tert 3 | 0.85 [0.39, 1.84] | |
| Lmin tert 2 | 0.91 [0.43, 1.93] | |
| Female | 1.40 [0.71, 2.78] | |
| African American | 0.35 [0.17, 0.70] | |
| Age | 1.00 [0.96, 1.03] | |
| Age2e | 1.00 [1.00, 1.00] | |
Logistic regression clustered by subject demonstrated that patients with high SSE had 2 times higher odds of having a KSQI score above 3 (95% CI: 1.12, 3.71; P=0.020). This association was still significant after controlling for noise and patient demographics (OR: 2.01; 95% CI: 1.06, 3.79; P=0.032). After controlling for noise levels and patient demographics, there was a statistically significant association between high SSE and lower odds of noise complaints (OR: 0.49; 95% CI: 0.25, 0.96; P=0.039) (Table 2). Although demographic characteristics were not associated with high SSE, those patients with high SSE had lower odds of being in the loudest tertile rooms (OR: 0.34; 95% CI: 0.15, 0.74; P=0.007).
In multivariate linear regression analyses, there were no significant relationships between SLOC scores and KSQI, reported noise disruptiveness, and markers of sleep (sleep duration or sleep efficiency).
DISCUSSION
This study is the first to examine the relationship between perceived control, noise levels, and objective measurements of sleep in a hospital setting. One measure of perceived control, namely SSE, was associated with objective sleep duration, subjective and objective sleep quality, noise levels in patient rooms, and perhaps also patient complaints of noise. These associations remained significant after controlling for objective noise levels and patient demographics, suggesting that SSE is independently related to sleep.
In contrast to SSE, SLOC was not found to be significantly associated with either subjective or objective measures of sleep quality. The lack of association may be due to the fact that the SLOC questionnaire does not translate as well to the inpatient setting as the SSE questionnaire. The SLOC questionnaire focuses on general beliefs about sleep whereas the SSE questionnaire focuses on personal beliefs about one's own ability sleep in the immediate future, which may make it more relevant in the inpatient setting (see Supporting Information, Appendix 1 and 2, in the online version of this article).
Given our findings, it is important to identify why patients with high SSE have better sleep and fewer noise complaints. One possibility is that sleep self‐efficacy is an inherited trait unique to each person that is also predictive of a patient's sleep patterns. However, is it also possible that those patients with high SSE feel more empowered to take control of their environment, allowing them to advocate for better sleep? This hypothesis is further strengthened by the finding that those patients with high SSE on study entry were less likely to be in the noisiest rooms. This raises the possibility that at least 1 of the mechanisms by which high SSE may be protective against sleep loss is through patients taking an active role in noise reduction, such as closing the door or advocating for their sleep with staff. However, we did not directly observe or ask patients whether doors of patient rooms were open or closed or whether the patients took other measures to advocate for their own sleep. Thus, further work is necessary to understand the mechanisms by which sleep self‐efficacy may influence sleep.
One potential avenue for future research is to explore possible interventions for boosting sleep self‐efficacy in the hospital. Although most interventions have focused on environmental noise and staff‐based education, empowering patients through boosting SSE may be a helpful adjunct to improving hospital sleep.[33, 34] Currently, the SSE scale is not commonly used in the inpatient setting. Motivational interviewing and patient coaching could be explored as potential tools for boosting SSE. Furthermore, even if SSE is not easily changed, measuring SSE in patients newly admitted to the hospital may be useful in identifying patients most susceptible to sleep disruptions. Efforts to identify patients with low SSE should go hand‐in‐hand with measures to reduce noise. Addressing both patient‐level and environmental factors simultaneously may be the best strategy for improving sleep in an inpatient hospital setting.
In contrast to our prior study, it is worth noting that we did not find any significant relationships between overall noise levels and sleep.[9] In this dataset, nighttime noise is still a predictor of sleep loss in the hospital. However, when we restrict our sample to those who answered the SSE questionnaire and had nighttime noise recorded, we lose a significant number of observations. Because of our interest in testing the relationship between SSE and sleep, we chose to control for overall noise (which enabled us to retain more observations). We also did not find any interactions between SSE and noise in our regression models. Further work is warranted with larger sample sizes to better understand the role of SSE in the context of sleep and noise levels. In addition, females also received more sleep than males in our study.
There are several limitations to this study. This study was carried out at a single service at a single institution, limiting the ability to generalize the findings to other hospital settings. This study had a relatively high rate of patients who were unable to complete at least 1 night of data collection (42%), often due to watch removal for imaging or procedures, which may also affect the representativeness of our sample. Moreover, we can only examine associations and not causal relationships. The SSE scale has never been used in hospitalized patients, making comparisons between scores from hospitalized patients and population controls difficult. In addition, the SSE scale also has not been dichotomized in previous studies into high and low SSE. However, a sensitivity analysis with raw SSE scores did not change the results of our study. It can be difficult to perform actigraphy measurements in the hospital because many patients spend most of their time in bed. Because we chose a relatively healthy cohort of patients without significant limitations in mobility, actigraphy could still be used to differentiate time spent awake from time spent sleeping. Because we did not perform polysomnography, we cannot explore the role of sleep architecture which is an important component of sleep quality. Although the use of pharmacologic sleep aids is a potential confounding factor, the rate of use was very low in our cohort and unlikely to significantly affect our results. Continued study of this patient population is warranted to further develop the findings.
In conclusion, patients with high SSE sleep better in the hospital, tend to be in quieter rooms, and may report fewer noise complaints. Our findings suggest that a greater confidence in the ability to sleep may be beneficial in hospitalized adults. In addition to noise control, hospitals should also consider targeting patients with low SSE when designing novel interventions to improve in‐hospital sleep.
Disclosures
This work was supported by funding from the National Institute on Aging through a Short‐Term Aging‐Related Research Program (1 T35 AG029795), National Institute on Aging career development award (K23AG033763), a midcareer career development award (1K24AG031326), a program project (P01AG‐11412), an Agency for Healthcare Research and Quality Centers for Education and Research on Therapeutics grant (1U18HS016967), and a National Institute on Aging Clinical Translational Sciences award (UL1 RR024999). Dr. Arora had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the statistical analysis. The funding agencies had no role in the design of the study; the collection, analysis, and interpretation of the data; or the decision to approve publication of the finished manuscript. The authors report no conflicts of interest.
Lack of sleep is a common problem in hospitalized patients and is associated with poorer health outcomes, especially in older patients.[1, 2, 3] Prior studies highlight a multitude of factors that can result in sleep loss in the hospital,[3, 4, 5, 6] with noise being one of the most common causes of sleep disruption.[7, 8, 9]
In addition to external factors, such as hospital noise, there may be inherent characteristics that predispose certain patients to greater sleep loss when hospitalized. One such measure is the construct of perceived control or the psychological measure of how much individuals expect themselves to be capable of bringing about desired outcomes.[10] Among older patients, low perceived control is associated with increased rates of physician visits, hospitalizations, and death.[11, 12] In contrast, patients who feel more in control of their environment may experience positive health benefits.[13]
Yet, when patients are placed in a hospital setting, they experience a significant reduction in control over their environment along with an increase in dependency on medical staff and therapies.[14, 15] For example, hospitalized patients are restricted in their personal decisions, such as what clothes they can wear and what they can eat, and are not in charge of their own schedules, including their sleep time.
Although prior studies suggest that perceived control over sleep is related to actual sleep among community‐dwelling adults,[16, 17] no study has examined this relationship in hospitalized adults. Therefore, the aim of our study was to examine the possible association between perceived control, noise levels, and sleep in hospitalized middle‐aged and older patients.
METHODS
Study Design
We conducted a prospective cohort study of subjects recruited from a large ongoing study of admitted patients at the University of Chicago inpatient general medicine service.[18] Because we were interested in middle‐aged and older adults who are most sensitive to sleep disruptions, patients who were age 50 years and over, ambulatory, and living in the community were eligible for the study.[19] Exclusion criteria were cognitive impairment (telephone version of the Mini‐Mental State Exam <17 out of 22), preexisting sleeping disorders identified via patient charts, such as obstructive sleep apnea and narcolepsy, transfer from the intensive care unit (ICU), and admission to the hospital more than 72 hours prior to enrollment.[20] These inclusion and exclusion criteria were selected to identify a patient population with minimal sleep disturbances at baseline. Patients under isolation were excluded because they are not visited as frequently by the healthcare team.[21, 22] Most general medicine rooms were double occupancy but efforts were made to make patient rooms single when possible or required (ie, isolation for infection control). The study was approved by the University of Chicago Institutional Review Board.
Subjective Data Collection
Baseline levels of perceived control over sleep, or the amount of control patients believe they have over their sleep, were assessed using 2 different scales. The first tool was the 8‐item Sleep Locus of Control (SLOC) scale,[17] which ranges from 8 to 48, with higher values corresponding to a greater internal locus of control over sleep. An internal sleep locus of control indicates beliefs that patients are primarily responsible for their own sleep, as opposed to an external locus of control, which indicates beliefs that good sleep is due to luck or chance. For example, patients were asked how strongly they agree or disagree with statements such as, "If I take care of myself, I can avoid insomnia" and "People who never get insomnia are just plain lucky" (see Supporting Information, Appendix 2, in the online version of this article). The second tool was the 9‐item Sleep Self‐Efficacy (SSE) scale,[23] which ranges from 9 to 45, with higher values corresponding to greater confidence patients have in their ability to sleep. One of the items asks, "How confident are you that you can lie in bed feeling physically relaxed?" (see Supporting Information, Appendix 1, in the online version of this article). Both instruments have been validated in an outpatient setting.[23] These surveys were administered immediately on enrollment in the study to measure baseline perceived control.
Baseline sleep habits were also collected on enrollment using the Epworth Sleepiness Scale,[24, 25] a standard validated survey that assesses excess daytime sleepiness in various common situations. For each day in the hospital, patients were asked to report in‐hospital sleep quality using the Karolinska Sleep Log.[26] The Karolinska Sleep Quality Index (KSQI) is calculated from 4 items on the Karolinska Sleep Log (sleep quality, sleep restlessness, slept throughout the night, ease of falling asleep). The questions are on a 5‐point scale, and the 4 items are averaged for a final score out of 5, with a higher number indicating better subjective sleep quality. The item "How much was your sleep disturbed by noise?" on the Karolinska Sleep Log was used to assess the degree to which noise was a disruptor of sleep. This question was also on a 5‐point scale, with higher scores indicating greater disruptiveness of noise. Patients were also asked how disruptive noise from roommates was on a nightly basis using this same scale.
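The KSQI computation described above is a simple mean of the 4 sleep-log items. A minimal sketch, assuming each item has already been coded so that higher values indicate better sleep (argument names are illustrative labels, not the instrument's exact wording):

```python
def ksqi(sleep_quality, restlessness, slept_through, ease_falling_asleep):
    """Karolinska Sleep Quality Index: mean of 4 items, each scored 1-5.

    Higher scores indicate better subjective sleep quality; in this study,
    scores of 3 or below were described as falling in the insomniac range.
    """
    items = (sleep_quality, restlessness, slept_through, ease_falling_asleep)
    if not all(1 <= v <= 5 for v in items):
        raise ValueError("each item must be on the 1-5 scale")
    return sum(items) / len(items)
```

For example, nightly ratings of 4, 3, 5, and 2 yield a KSQI of 3.5, matching the cohort's median value.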
Objective Data Collection
Wrist activity monitors (Actiwatch 2; Respironics, Inc., Murrysville, PA)[27, 28, 29, 30] were used to measure patient sleep. Actiware 5 software (Respironics, Inc.)[31] was used to estimate quantitative measures of sleep time and efficiency. Sleep time is defined as the total duration of time spent sleeping at night and sleep efficiency is defined as the fraction of time, reported as a percentage, spent sleeping by actigraphy out of the total time patients reported they were sleeping.
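Under these definitions, sleep efficiency is the actigraphy-derived sleep time divided by the patient-reported sleep time, expressed as a percentage. A minimal sketch (function and argument names are illustrative, not from the Actiware software):

```python
def sleep_efficiency(actigraphy_sleep_min, reported_sleep_min):
    """Sleep efficiency (%): actigraphy-measured sleep time out of the
    total time the patient reported sleeping.

    Values below 80% fall outside the normal adult range cited in the text.
    """
    if reported_sleep_min <= 0:
        raise ValueError("reported sleep time must be positive")
    return 100.0 * actigraphy_sleep_min / reported_sleep_min
```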
Sound levels in patient rooms were recorded using Larson Davis 720 Sound Level Monitors (Larson Davis, Inc., Provo, UT). These monitors store functional average sound pressure levels in A‐weighted decibels called the Leq over 1‐hour intervals. The Leq is the average sound level over the given time interval. Minimum (Lmin) and maximum (Lmax) sound levels are also stored. The LD SLM Utility Program (Larson Davis, Inc.) was used to extract the sound level measurements recorded by the monitors.
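The Leq reported by such monitors is an energy average rather than an arithmetic mean of decibel readings: each level is converted to relative sound power, the powers are averaged, and the result is converted back to decibels. A sketch of this standard calculation, assuming equal-length, already A-weighted samples (the monitor's internal implementation may differ):

```python
import math

def leq(levels_db):
    """Equivalent continuous sound level (Leq) over equal-length samples.

    Decibels are logarithmic, so the average is taken over relative sound
    power (10**(L/10)) and then converted back to dB.
    """
    if not levels_db:
        raise ValueError("need at least one sample")
    mean_power = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    return 10 * math.log10(mean_power)
```

Because louder samples dominate the power average, a single loud event raises the hourly Leq far more than an arithmetic mean of decibels would suggest.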
Demographic information (age, gender, race, ethnicity, highest level of education, length of stay in the hospital, and comorbidities) was obtained from hospital charts via an ongoing study of admitted patients at the University of Chicago Medical Center inpatient general medicine service.[18] Chart audits were performed to determine whether patients received pharmacologic sleep aids in the hospital.
Data Analysis
Descriptive statistics were used to summarize mean sleep duration and sleep efficiency in the hospital as well as SLOC and SSE. Because the SSE scores were not normally distributed, the scores were dichotomized at the median to create a variable denoting high and low SSE. Additionally, because the distribution of responses to the noise disruption question was skewed to the right, reports of noise disruptions were grouped into not disruptive (score=1) and disruptive (score>1).
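The two dichotomizations described above can be sketched as follows (function names are illustrative, not taken from the authors' analysis code):

```python
import statistics

def dichotomize_sse(scores):
    """Split SSE scores at the sample median: True = high SSE (above the
    median), False = low SSE (at or below)."""
    med = statistics.median(scores)
    return [s > med for s in scores]

def code_noise_disruption(score):
    """Karolinska noise item on a 1-5 scale: 1 = 'not disruptive',
    anything above 1 = 'disruptive'."""
    return "disruptive" if score > 1 else "not disruptive"
```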
Two‐sample t tests with equal variances were used to assess the relationship between perceived control measures (high/low SLOC, SSE) and objective sleep measures (sleep time, sleep efficiency). Multivariate linear regression was used to test the association between high SSE (independent variable) and sleep time (dependent variable), clustering for multiple nights of data within the subject. Multivariate logistic regression, also adjusting for subject, was used to test the association between high SSE and noise disruptiveness and the association between high SSE and Karolinska scores. Leq, Lmax, and Lmin were all tested using stepwise forward regression. Because our prior work[9] demonstrated that noise levels separated into tertiles were significantly associated with sleep time, our analysis also used noise levels separated into tertiles. Stepwise forward regression was used to add basic patient demographics (gender, race, age) to the models. Statistical significance was defined as P<0.05, and all statistical analysis was done using Stata 11.0 (StataCorp, College Station, TX).
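The stepwise forward selection mentioned above adds candidate covariates one at a time, retaining a variable only when it improves model fit. The authors ran their models in Stata; as a rough illustration of the idea only, here is a simplified numpy-based sketch that uses adjusted R² as the inclusion criterion (actual stepwise procedures typically use p-value thresholds, and this sketch omits the clustering by subject):

```python
import numpy as np

def adj_r2(y, X):
    """Adjusted R^2 of an OLS fit of y on X (X includes an intercept column)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = X.shape
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1 - (ss_res / (n - k)) * (n - 1) / ss_tot

def forward_select(y, candidates):
    """Greedily add columns from `candidates` (dict: name -> 1D array)
    while adjusted R^2 keeps improving."""
    n = len(y)
    chosen = []
    X = np.ones((n, 1))  # start from an intercept-only model
    best = adj_r2(y, X)
    improved = True
    while improved:
        improved = False
        for name, col in candidates.items():
            if name in chosen:
                continue
            X_try = np.column_stack([X, col])
            score = adj_r2(y, X_try)
            if score > best:
                best, best_name, best_X = score, name, X_try
                improved = True
        if improved:
            chosen.append(best_name)
            X = best_X
    return chosen
```

With a strongly predictive covariate and an irrelevant one, the predictive covariate is selected first.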
RESULTS
From April 2010 to May 2012, 1134 patients were screened by study personnel for this study via an ongoing study of hospitalized patients on the inpatient general medicine ward. Of the 361 (31.8%) eligible patients, 206 (57.1%) consented to participate. Of the subjects enrolled in the study, 118 were able to complete at least 1 night of actigraphy, sound monitoring, and subjective assessment for a total of 185 patient nights (Figure 1).
The majority of patients were female (57%), African American (67%), and non‐Hispanic (97%). The mean age was 65 years (standard deviation [SD], 11.6 years), and the median length of stay was 4 days (interquartile range [IQR], 3–6 days). The majority of patients also had hypertension (67%), with chronic obstructive pulmonary disease (COPD) (31%) and congestive heart failure (31%) being the next most common comorbidities. About two‐thirds of subjects (64%) were characterized as average or above average sleepers, with Epworth Sleepiness Scale scores ≤9[20] (Table 1). Only 5% of patients received pharmacological sleep aids.
| | Value, n (%)a |
|---|---|
| Patient characteristics | |
| Age, mean (SD), y | 63 (12) |
| Length of stay, median (IQR), db | 4 (3–6) |
| Female | 67 (57) |
| African American | 79 (67) |
| Hispanic | 3 (3) |
| High school graduate | 92 (78) |
| Comorbidities | |
| Hypertension | 79 (66) |
| Chronic obstructive pulmonary disease | 37 (31) |
| Congestive heart failure | 37 (31) |
| Diabetes | 36 (30) |
| End stage renal disease | 23 (19) |
| Baseline sleep characteristics | |
| Sleep duration, mean (SD), minc | 333 (128) |
| Epworth Sleepiness Scale, score ≤9d | 73 (64) |
The mean baseline SLOC score was 30.4 (SD, 6.7), with a median of 31 (IQR, 27–35). The mean baseline SSE score was 32.1 (SD, 9.4), with a median of 34 (IQR, 24–41). Fifty‐four patients were categorized as having high sleep self‐efficacy (high SSE), which we defined as scoring above the median of 34.
Average in‐hospital sleep was 5.5 hours (333 minutes; SD, 128 minutes), which was significantly shorter than the self‐reported sleep duration of 6.5 hours prior to admission (387 minutes; SD, 125 minutes; P=0.0001). The mean sleep efficiency was 73% (SD, 19%), with 55% of actigraphy nights below the normal range of 80% efficiency for adults.[19] Median KSQI was 3.5 (IQR, 2.25–4.75), with 41% of patients having a KSQI ≤3, putting them in the insomniac range.[32] The median score on the noise disruptiveness question was 1 (IQR, 1–4), with 42% of reports coded as disruptive (defined as a score >1 on the 5‐point scale). The median score on the roommate disruptiveness question was 1 (IQR, 1–1), with 77% of responses coded as not disruptive (defined as a score of 1 on the 5‐point scale).
A 2‐sample t test with equal variances showed that patients reporting high SSE slept longer in the hospital than those reporting low SSE (364 minutes [95% confidence interval (CI): 340, 388] vs 309 minutes [95% CI: 283, 336]; P=0.003) (Figure 2). Patients with high SSE were also more likely to have a normal sleep efficiency (above 80%) compared to those with low SSE (54% [95% CI: 43, 65] vs 38% [95% CI: 28, 47]; P=0.028). Last, there was a trend toward patients with high SSE reporting less noise disruption than those with low SSE (42% [95% CI: 31, 53] vs 56% [95% CI: 46, 65]; P=0.063) (Figure 3).
Linear regression clustered by subject showed that high SSE was associated with longer sleep duration (55 minutes; 95% CI: 14, 97; P=0.010). Furthermore, high SSE remained significantly associated with longer sleep duration after controlling for both objective noise level and patient demographics using stepwise forward regression (50 minutes; 95% CI: 11, 90; P=0.014) (Table 2).
| Sleep Duration (min) | Model 1 Beta [95% CI]a | Model 2 Beta [95% CI]a |
|---|---|---|
| High SSE | 55 [14, 97]b | 50 [11, 90]b |
| Lmin tert 3 | | −14 [−59, 29] |
| Lmin tert 2 | | −21 [−65, 23] |
| Female | | 49 [10, 89]b |
| African American | | −16 [−59, 27] |
| Age | | 1 [−0.9, 3] |
| Karolinska Sleep Quality | Model 1 OR [95% CI]c | Model 2 OR [95% CI]c |
| High SSE | 2.04 [1.12, 3.71]b | 2.01 [1.06, 3.79]b |
| Lmin tert 3 | | 0.90 [0.37, 2.2] |
| Lmin tert 2 | | 0.86 [0.38, 1.94] |
| Female | | 1.78 [0.90, 3.52] |
| African American | | 1.19 [0.60, 2.38] |
| Age | | 1.02 [0.99, 1.05] |
| Noise Complaints | Model 1 OR [95% CI]d | Model 2 OR [95% CI]d |
| High SSE | 0.57 [0.30, 1.12] | 0.49 [0.25, 0.96]b |
| Lmin tert 3 | | 0.85 [0.39, 1.84] |
| Lmin tert 2 | | 0.91 [0.43, 1.93] |
| Female | | 1.40 [0.71, 2.78] |
| African American | | 0.35 [0.17, 0.70] |
| Age | | 1.00 [0.96, 1.03] |
| Age2e | | 1.00 [1.00, 1.00] |
Logistic regression clustered by subject demonstrated that patients with high SSE had roughly twice the odds of having a KSQI score above 3 (OR: 2.04; 95% CI: 1.12, 3.71; P=0.020). This association remained significant after controlling for noise and patient demographics (OR: 2.01; 95% CI: 1.06, 3.79; P=0.032). After controlling for noise levels and patient demographics, there was also a statistically significant association between high SSE and lower odds of noise complaints (OR: 0.49; 95% CI: 0.25, 0.96; P=0.039) (Table 2). Although demographic characteristics were not associated with high SSE, patients with high SSE had lower odds of being in the loudest tertile of rooms (OR: 0.34; 95% CI: 0.15, 0.74; P=0.007).
In multivariate linear regression analyses, there were no significant relationships between SLOC scores and KSQI, reported noise disruptiveness, and markers of sleep (sleep duration or sleep efficiency).
DISCUSSION
This study is the first to examine the relationship between perceived control, noise levels, and objective measurements of sleep in a hospital setting. One measure of perceived control, namely SSE, was associated with objective sleep duration, subjective and objective sleep quality, noise levels in patient rooms, and perhaps also patient complaints of noise. These associations remained significant after controlling for objective noise levels and patient demographics, suggesting that SSE is independently related to sleep.
In contrast to SSE, SLOC was not found to be significantly associated with either subjective or objective measures of sleep quality. The lack of association may be because the SLOC questionnaire does not translate as well to the inpatient setting as the SSE questionnaire. The SLOC questionnaire focuses on general beliefs about sleep, whereas the SSE questionnaire focuses on personal beliefs about one's own ability to sleep in the immediate future, which may make it more relevant in the inpatient setting (see Supporting Information, Appendix 1 and 2, in the online version of this article).
Given our findings, it is important to identify why patients with high SSE have better sleep and fewer noise complaints. One possibility is that sleep self‐efficacy is an inherited trait unique to each person that is also predictive of a patient's sleep patterns. However, is it also possible that those patients with high SSE feel more empowered to take control of their environment, allowing them to advocate for better sleep? This hypothesis is further strengthened by the finding that those patients with high SSE on study entry were less likely to be in the noisiest rooms. This raises the possibility that at least 1 of the mechanisms by which high SSE may be protective against sleep loss is through patients taking an active role in noise reduction, such as closing the door or advocating for their sleep with staff. However, we did not directly observe or ask patients whether doors of patient rooms were open or closed or whether the patients took other measures to advocate for their own sleep. Thus, further work is necessary to understand the mechanisms by which sleep self‐efficacy may influence sleep.
One potential avenue for future research is to explore possible interventions for boosting sleep self‐efficacy in the hospital. Although most interventions have focused on environmental noise and staff‐based education, empowering patients through boosting SSE may be a helpful adjunct to improving hospital sleep.[33, 34] Currently, the SSE scale is not commonly used in the inpatient setting. Motivational interviewing and patient coaching could be explored as potential tools for boosting SSE. Furthermore, even if SSE is not easily changed, measuring SSE in patients newly admitted to the hospital may be useful in identifying patients most susceptible to sleep disruptions. Efforts to identify patients with low SSE should go hand‐in‐hand with measures to reduce noise. Addressing both patient‐level and environmental factors simultaneously may be the best strategy for improving sleep in an inpatient hospital setting.
It is worth noting that, in contrast to our prior study, we did not find any significant relationships between overall noise levels and sleep.[9] In this dataset, nighttime noise remained a predictor of sleep loss in the hospital; however, when we restricted our sample to patients who answered the SSE questionnaire and had nighttime noise recorded, we lost a significant number of observations. Because of our interest in testing the relationship between SSE and sleep, we chose to control for overall noise, which enabled us to retain more observations. We also did not find any interaction between SSE and noise in our regression models. Further work with larger sample sizes is warranted to better understand the role of SSE in the context of sleep and noise levels. In addition, women slept longer than men in our study.
There are several limitations to this study. It was carried out on a single service at a single institution, limiting the generalizability of the findings to other hospital settings. A relatively high proportion of patients (42%) were unable to complete at least 1 night of data collection, often because the watch was removed for imaging or procedures, which may also affect the representativeness of our sample. Moreover, we can only examine associations, not causal relationships. The SSE scale had not previously been used in hospitalized patients, making comparisons between scores from hospitalized patients and population controls difficult. The SSE scale also had not been dichotomized into high and low SSE in previous studies; however, a sensitivity analysis using raw SSE scores did not change the results of our study. Actigraphy can be difficult to perform in the hospital because many patients spend most of their time in bed; because we chose a relatively healthy cohort of patients without significant limitations in mobility, actigraphy could still be used to differentiate time spent awake from time spent sleeping. Because we did not perform polysomnography, we cannot explore the role of sleep architecture, which is an important component of sleep quality. Although the use of pharmacologic sleep aids is a potential confounder, the rate of use was very low in our cohort and unlikely to significantly affect our results. Continued study of this patient population is warranted to further develop these findings.
In conclusion, patients with high SSE sleep better in the hospital, tend to be in quieter rooms, and may report fewer noise complaints. Our findings suggest that a greater confidence in the ability to sleep may be beneficial in hospitalized adults. In addition to noise control, hospitals should also consider targeting patients with low SSE when designing novel interventions to improve in‐hospital sleep.
Disclosures
This work was supported by funding from the National Institute on Aging through a Short‐Term Aging‐Related Research Program (1 T35 AG029795), National Institute on Aging career development award (K23AG033763), a midcareer career development award (1K24AG031326), a program project (P01AG‐11412), an Agency for Healthcare Research and Quality Centers for Education and Research on Therapeutics grant (1U18HS016967), and a National Institute on Aging Clinical Translational Sciences award (UL1 RR024999). Dr. Arora had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the statistical analysis. The funding agencies had no role in the design of the study; the collection, analysis, and interpretation of the data; or the decision to approve publication of the finished manuscript. The authors report no conflicts of interest.
- , , , . The metabolic consequences of sleep deprivation. Sleep Med Rev. 2007;11(3):163–178.
- , , , , , . Poor self‐reported sleep quality predicts mortality within one year of inpatient post‐acute rehabilitation among older adults. Sleep. 2011;34(12):1715–1721.
- , , , et al. The sleep of older people in hospital and nursing homes. J Clin Nurs. 1999;8:360–368.
- , , et al. Sleep in hospitalized medical patients, part 1: factors affecting sleep. J Hosp Med. 2008; 3:473–482.
- , , , et al. Nocturnal care interactions with patients in critical care units. Am J Crit Care. 2004;13:102–112; quiz 114–115.
- , , . Patient perception of sleep quality and etiology of sleep disruption in the intensive care unit. Am J Respir Crit Care Med. 1999;159:1155–1162.
- . Sleep in acute care settings: an integrative review. J Nurs Scholarsh. 2000;32(1):31–38.
- , , , et al. Sleep disruption due to hospital noises: a prospective evaluation. Ann Intern Med. 2012;157(3):170–179.
- , , , et al. Noise and sleep among adult medical inpatients: far from a quiet night. Arch Intern Med. 2012;172:68–70.
- . Generalized expectancies for internal versus external control of reinforcement. Psychol Monogr. 1966;80:1–28.
- , . Psychosocial risk factors and mortality: a prospective study with special focus on social support, social participation, and locus of control in Norway. J Epidemiol Community Health. 1998;52:476–481.
- , . The interactive effect of perceived control and functional status on health and mortality among young‐old and old‐old adults. J Gerontol B Psychol Sci Soc Sci. 1997;52:P118–P126.
- , . Role‐specific feelings of control and mortality. Psychol Aging. 2000;15:617–626.
- , , . Patient empowerment in intensive care—an interview study. Intensive Crit Care Nurs. 2006;22:370–377.
- , , . Exploring the relationship between personal control and the hospital environment. J Clin Nurs. 2008;17:1601–1609.
- , , , et al. Effects of volitional lifestyle on sleep‐life habits in the aged. Psychiatry Clin Neurosci. 1998;52:183–184.
- , , , et al. Sleep locus of control: report on a new scale. Behav Sleep Med. 2004;2:79–93.
- , , , et al. Effects of physician experience on costs and outcomes on an academic general medicine service: results of a trial of hospitalists. Ann Intern Med. 2002;137:866–874.
- , , , et al. The effects of age, sex, ethnicity, and sleep‐disordered breathing on sleep architecture. Arch Intern Med. 2004;164:406–418.
- , , , et al. Validation of a telephone version of the mini‐mental state examination. J Am Geriatr Soc. 1992;40:697–702.
- , , , et al. Contact isolation in surgical patients: a barrier to care? Surgery. 2003;134:180–188.
- , . Adverse effects of contact isolation. Lancet. 1999;354:1177–1178.
- . Behavioral Treatment for Persistent Insomnia. Elmsford, NY: Pergamon Press; 1987.
- . A new method for measuring daytime sleepiness: the Epworth sleepiness scale. Sleep. 1991;14:540–545.
- . Reliability and factor analysis of the Epworth Sleepiness Scale. Sleep. 1992;15:376–381.
- , . Objective components of individual differences in subjective sleep quality. J Sleep Res. 1997;6:217–220.
- , , , et al. The role of actigraphy in the study of sleep and circadian rhythms. Sleep. 2003;26:342–392.
- , , , et al. Practice parameters for the use of actigraphy in the assessment of sleep and sleep disorders: an update for 2007. Sleep. 2007;30:519–529.
- , , , et al. The role of actigraphy in the evaluation of sleep disorders. Sleep. 1995;18:288–302.
- , , , et al. Clinical review: sleep measurement in critical care patients: research and clinical implications. Crit Care. 2007;11:226.
- , , , et al. Evaluation of immobility time for sleep latency in actigraphy. Sleep Med. 2009;10:621–625.
- , , , et al. The subjective meaning of sleep quality: a comparison of individuals with and without insomnia. Sleep. 2008;31:383–393.
- , , , et al. Sleep in hospitalized medical patients, part 2: behavioral and pharmacological management of sleep disturbances. J Hosp Med. 2009;4:50–59.
- , , , . A nonpharmacologic sleep protocol for hospitalized older patients. J Am Geriatr Soc. 1998;46(6):700–705.
Copyright © 2013 Society of Hospital Medicine
Hospitalists on Alert as CRE Infections Spike
Hospitalists should be on the lookout for carbapenem-resistant Enterobacteriaceae (CRE) infections, says one author of a CDC report that noted a three-fold increase in the proportion of Enterobacteriaceae bugs that proved resistant to carbapenem within the past decade.
Earlier this month, the CDC's Morbidity and Mortality Weekly Report revealed that the percentage of CRE infections jumped to 4.2% in 2011 from 1.2% in 2001, according to data from the National Nosocomial Infection Surveillance system.
"It is a very serious public health threat," says co-author Alex Kallen, MD, MPH, a medical epidemiologist and outbreak response coordinator in the CDC's Division of Healthcare Quality Promotion. "Maybe it's not that common now, but with no action, it has the potential to become much more common, like a lot of the other MDROs [multidrug-resistant organisms] that hospitalists see regularly. [Hospitalists] have a lot of control over some of the things that could potentially lead to increased transmission."
Dr. Kallen says HM groups can help reduce the spread of CRE through antibiotic stewardship, the review of detailed patient histories to ferret out risk factors, and dedication to contact precautions and hand hygiene. Hospitalists also play a leadership role in coordinating efforts for patients transferring between hospitals and other institutions, such as skilled-nursing or assisted-living facilities, he says.
Dr. Kallen added that hospitalists should not dismiss CRE, even if they rarely encounter it.
"If you're a place that doesn't see this very often, and you see one, that's a big deal," he adds. "It needs to be acted on aggressively. Being proactive is much more effective than waiting until it's common and then trying to intervene."
Visit our website for more information on hospital-acquired infections.
Foundation Chips in to Reduce 30-Day Readmissions
The Robert Wood Johnson Foundation of Princeton, N.J., the country’s largest healthcare-focused philanthropy, has undertaken a number of initiatives to improve care transitions and reduce preventable hospital readmissions.
One of the key conclusions from these initiatives, says Anne Weiss, MPP, director of the foundation's Quality/Equality Health Care Team, is that hospitals and hospitalists can't do it alone. "Hospitals are now being held financially accountable for something they can't possibly control," Weiss says, referring to whether or not the discharged patient returns to the hospital within 30 days.
The foundation has mobilized broad community coalitions through its Aligning Forces for Quality campaign, bringing together healthcare providers, purchasers, consumers, and other stakeholders to improve care transitions. One such coalition, Better Health Greater Cleveland of Ohio, announced a 10.7% reduction in avoidable hospitalizations for common cardiac conditions in 2011.
Successful care transitions also require healthcare providers to appreciate the need for patients and their families to engage in their plans for post-discharge care, Weiss says. "I have been stunned to learn the kinds of medical tasks patients and families are now expected to conduct when they go home," she adds. "I hear them say, 'Nobody told us we would have to flush IVs.'"
Through another initiative, the foundation produced an interactive map that displays the percentage of patients readmitted to hospitals within 30 days of discharge; it has supported research that found improvements in nurses' work environments helped to reduce avoidable hospital readmissions. It also has produced a "Transitions to Better Care" video contest for hospitals, as well as a national publicity campaign about these issues called "Care About Your Care."
Visit our website for more information about patient care transitions.
Head, neck infections rising among children
Pediatricians in the emergency room are seeing more, and more complicated, head and neck infections, according to Dr. Keith Borg, an emergency medicine physician and assistant professor at the Medical University of South Carolina. He offered some tips on choosing the best course of diagnosis and treatment.