A research team based at the Cleveland Clinic has released a new risk assessment tool to allow fair comparison of hospital outcomes across institutions. The tool provides a reliable way for hospitals to predict length of stay and mortality for surgical patients using only administrative data, researchers said.
The tool – the Risk Stratification Index – is in the public domain. The Cleveland Clinic uses it to stratify risk in its internal outcomes analyses, according to Dr. Daniel Sessler, the article's lead author, who chairs the department of outcomes research at the clinic (Anesthesiology 2010;113:1026-37).
“Hospitals are already being compared,” Dr. Sessler said in an interview. “But comparisons only make sense after adjusting for baseline and the risk associated with different operations. Our Risk Stratification Index allows for an accurate and fair comparison among hospitals using only publicly available data.” He said a new risk assessment tool was needed because institutions use various systems to evaluate outcomes, and many of these systems are proprietary and nontransparent.
“Available systems are either inaccurate or require special clinical data that are not generally or publicly available,” he said, adding that the Risk Stratification Index (RSI) is more accurate than other generally available nonproprietary systems, and uses only publicly available billing information.
To develop the index, Dr. Sessler and his colleagues used more than 35 million patient stay records from 2001-2006 Medicare Provider Analysis and Review files, randomly dividing them into development and validation sets. RSIs for length of stay and mortality end points were derived from aggregate risk associated with individual diagnostic and procedure codes.
Next, the researchers tested the performance of the RSIs prospectively on the validation database, as well as on a single-institution registry of 103,324 adult surgical patients, and compared the results with an index designed to predict 1-year mortality.
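The general workflow described above — randomly splitting administrative records into development and validation sets, deriving a per-code risk weight, and scoring patients by aggregating those weights — can be sketched in a toy example. This is not the published RSI model; the codes, outcome rates, and the simple rate-based weighting below are all invented for illustration.

```python
import random
from collections import defaultdict

random.seed(0)

# Synthetic "administrative" records: each patient stay carries billing
# codes and a binary outcome (e.g., mortality). Codes and rates invented.
CODES = ["d401", "d428", "p3961", "p8151", "d250"]

def make_record():
    codes = random.sample(CODES, k=random.randint(1, 3))
    # Invented assumption: code "d428" carries extra risk in this toy data.
    p = 0.05 + (0.30 if "d428" in codes else 0.0)
    return codes, 1 if random.random() < p else 0

records = [make_record() for _ in range(10_000)]

# Randomly divide records into development and validation sets.
random.shuffle(records)
split = len(records) // 2
development, validation = records[:split], records[split:]

# Aggregate risk per code from the development set: the outcome rate
# among stays carrying that code (a crude stand-in for fitted weights).
events, totals = defaultdict(int), defaultdict(int)
for codes, outcome in development:
    for c in codes:
        totals[c] += 1
        events[c] += outcome
code_risk = {c: events[c] / totals[c] for c in totals}

def risk_score(codes):
    """Score one patient stay by summing its per-code risks."""
    return sum(code_risk.get(c, 0.0) for c in codes)

# Check on the held-out validation set: scores should be higher, on
# average, among stays that had the outcome than among those that didn't.
scored = [(risk_score(codes), outcome) for codes, outcome in validation]
mean_event = sum(s for s, o in scored if o) / sum(1 for _, o in scored if o)
mean_none = sum(s for s, o in scored if not o) / sum(1 for _, o in scored if not o)
print(mean_event > mean_none)
```

The split-then-validate step is the point of the sketch: weights are estimated only on the development half, so the separation seen in the validation half is an out-of-sample check rather than a refit.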
They found that the risk stratification model accurately predicted 30-day and 1-year postdischarge mortality, while separate risk stratification models predicted length of stay and in-hospital mortality. The risk predictions are accurate for as few as 2,000 patients, meaning the system can be used effectively by small hospitals.
“RSI is a broadly applicable and robust system for assessing hospital length of stay and mortality for groups of surgical patients based solely on administrative data,” Dr. Sessler and his colleagues concluded in their paper.
They wanted to make the RSI available to any hospital, so they put it in the public domain, Dr. Sessler explained. He anticipates that it will be adopted rapidly because it is objective, transparent, requires only billing codes, and is free to use. Details of how to use the system, along with sample files, are available at www.clevelandclinic.org/RSI.
The tool shows promise but has some drawbacks, Dr. Charles Mabry of the University of Arkansas in Pine Bluff noted in an interview. “Like many risk adjustment methods, this relies upon the administrative data set, which is submitted with hospital bills to insurers. As such, many clinical factors, such as weight, blood pressure, drugs used, socioeconomic status, etc., aren't reported, and thus [are] unavailable to help with risk stratification.”
For large numbers of patients, the administrative data set can help reveal major differences in such factors as treatment and medications, Dr. Mabry said. However, for smaller numbers of patients – for example, the number in a group that had one particular surgical procedure – it becomes weaker, he said.
Other large organizations, along with the Centers for Medicare and Medicaid Services, are using the administrative data set for their own risk adjustment algorithms, Dr. Mabry noted.
The American College of Surgeons' National Surgical Quality Improvement Program (NSQIP) does measure the clinical factors omitted from the administrative data set, along with some complications that might also be missed, said Dr. Mabry. “Thus, compared with the Sessler index, it can more reliably detect differences in outcomes for smaller numbers of patients, such as comparing the outcomes of gallbladder surgery between various hospitals,” he said.
However, the Physician Quality Reporting Initiative (PQRI) primarily measures process as opposed to outcomes, Dr. Mabry said. “I think PQRI is a waste of time and effort,” he said. “Many feel that outcomes measurement is really what we need to be aiming for, rather than process compliance.”
Dr. Chad Rubin, a surgeon in Columbia, S.C., agreed that the RSI is limited by its reliance on the administrative data set. “While it appears a useful tool, I am always reticent to give credence to something so important as hospital (and maybe doctor) outcomes when the original data may be flawed,” he said in an interview.
The NSQIP, meanwhile, “may be more relevant to quality. For instance, the definition of skin and soft tissue infection, while a very common diagnosis/complication, varies widely in the claims data but has a strict definition by NSQIP,” Dr. Rubin said. “While NSQIP is expensive (both the enrollment and FTE required), it depends on the quality of the data as to whether it is too resource-intensive. I'm sure hospitals have spent a lot more on SCIP [Surgical Care Improvement Project] than on NSQIP for a lot less improvement in quality.”
NSQIP remains the gold standard, Dr. Rubin said. “The use of good clinical data carefully collected and carefully risk adjusted is, in my opinion, the way to go,” he said. “I'm worried that lesser claims data will not be accurate but will be acted upon as if it were.”
Dr. Sessler said he agrees that the NSQIP registry is valuable, but it applies to a limited number of hospitals, and fewer than 1% of U.S. surgical patients. “Specially trained nurses must abstract clinical details from the records of each NSQIP patient,” he said. “Because NSQIP applies to so few patients in so few hospitals, it cannot be used to compare hospital performance.”