Surgeons successfully reattach testis after wrong-site surgery
NEW YORK (Reuters) – After doctors removed the wrong testis from a young man, the search was on for a surgeon who might be willing to try to replant it.
A new case report details the experience of a 25-year-old patient who had developed testicular pain and a palpable mass in his right testis; he went to a local hospital for a radical orchiectomy only to have the surgical team remove the left – wrong – testis.
Once the team recognized their error, they began searching for a center with microsurgical capacity to replant the testis.
“The take-home message is that microsurgery can be used to reattach an organ, in the case of a wrong-site surgery,” lead author Dr. Fatma Tuncer, a microsurgery fellow at the Cleveland Clinic, in Ohio, at the time of the surgery, told Reuters Health by email. She is now an assistant professor of plastic surgery at the University of Utah.
“The vast majority of surgeries, including urologic procedures, will never have such an event, but there are helpful groups of physicians that are available to reduce the morbidity of such an event,” said coauthor Dr. Brian Gastman, a professor of surgery at the Case Western School of Medicine and a surgeon at the Cleveland Clinic.
“We were, I believe, the third one contacted, each one causing a greater time of ischemia,” Dr. Gastman told Reuters Health by email. “I accepted the patient and in doing so had the buy-in of my urology and anesthesia colleagues.”
Once Dr. Gastman and his team agreed to take on the task, the patient, and his testis, were flown to Cleveland. Once the patient arrived, he was counseled on the risks and benefits of the surgery. After agreeing to the surgery, the patient was taken to the OR immediately by the plastic surgery and urology teams.
Before anesthesia was initiated, the testicle was examined and the urology team performed testicular sperm extraction, as the patient did not have any biological children. The sperm were transported to a CLIA-certified andrology lab and cryopreserved.
Next, the team examined the testis and spermatic cord under the microscope. They identified the testicular artery, veins and vas deferens and marked them with Prolene sutures. They then placed the testis in moist gauze over ice until the recipient vessels were prepared.
After the team reconnected vessels, they observed strong arterial and venous Doppler flow on both testicular vessels and the testis itself. Five days after the replantation surgery, the team performed a radical orchiectomy on the correct side.
Dr. Gastman isn’t sure how well the testis will perform over time. “I cannot speak too much on this as it is ongoing,” he said. “But he will likely need some level of hormonal supplementation. I can state that the testis is alive and palpable.”
This is a “very interesting paper,” said Dr. Miroslav Djordjevic, a professor of urology at the Icahn School of Medicine at Mount Sinai, New York. “Congratulations to colleagues for a great idea for solving this wrong-site surgery with very precise microsurgical technique and new insight in the fight to save the organs.”
Still, Dr. Djordjevic told Reuters Health by email, “postoperatively, the authors confirmed there was not complete testicular function based on testosterone levels and hypotrophy of the reimplanted testis. The main reason is the time between removal and reimplantation. Based on experiences with testicular torsion, four to six hours is the maximum that will offer restoration of volume and function. Here, a longer period (10 hours) resulted in poor outcomes.”
“Our experience with testicular implantation in monozygotic twins showed great success (Belgrade University, Serbia, December 2019, personal report) because the cold ischemia was only one hour,” Dr. Djordjevic said.
Reuters Health Information © 2022
A Boost for QI Research
In a move that pleased many researchers, the Office of Human Research Protections (OHRP) in mid-February reversed its decision to shut down a Johns Hopkins Quality Improvement study in Michigan.
On the heels of an SHM-led coalition’s efforts, a letter to the Hopkins researchers said the OHRP decided to move on and would immediately lift its ban on data collection by the Michigan hospitals participating in the study.
At first glance the new decision appeared to be a victory for researchers and others who worried the OHRP’s earlier ruling might have a chilling effect on quality improvement (QI) studies. A closer examination of the agency’s response shows that while officials at the OHRP heard and reacted to the loud outcry from the medical community, they haven’t significantly changed their approach to regulating QI research.
In fact, the OHRP’s director explains the apparent about-face wasn’t really a reversal. It simply was a determination that the time for regulation already had passed—that essentially the horse already left the barn.
“Because the five-part intervention (including the checklist) has now been adopted by the Michigan hospitals as a proven effective standard of practice, the intervention no longer represents a research intervention with the patients at the hospitals, and is therefore not research involving human subjects,” says OHRP Director Ivor Pritchard, PhD. “And because Johns Hopkins is not receiving private, identifiable data from the Michigan hospitals, but rather de-identified data about the frequency of infections in the ICUs, this research activity is not research involving human subjects.”
What this means is the OHRP again may decide to step in if it were to receive a complaint about an ongoing QI study, like the Johns Hopkins project.
“Assuming [Health and Human Services] had the authority to regulate the activity, and the regulations had not changed, we would continue to advise institutions that such a QI study would fall under the U.S. Department of Health and Human Services (HHS) protection of human subject regulations,” Dr. Pritchard says. “Whether we would take a compliance action in response to a complaint about such a research activity is a different matter, however, and would depend on the specific facts of the case.”
The most recent letter to Johns Hopkins and Dr. Pritchard’s responses show there really hasn’t been any resolution to the problem, says Mary Ann Baily, PhD, an associate for ethics and health policy at the Hastings Center in Garrison, N.Y.
In fact, the letter to Johns Hopkins suggests a study with the exact design might again run afoul of the OHRP, Dr. Baily says.
Still, Dr. Pritchard’s comments show there have been some changes in the way the OHRP views its role when it comes to QI studies and this may impact the way the agency responds next time, Dr. Baily says.
Although Dr. Pritchard didn’t rule out the possibility a future study might be shut down, the agency appears to have become sensitized to the concerns of the research community. “Our current efforts are directed toward finding better ways to communicate the relationship between quality improvement and research to both the healthcare and research communities,” he explains. “At the same time we are also reviewing the application of these rules to QI activities like the Johns Hopkins project and whether any changes are needed to encourage such work.”
This is a good sign, Dr. Baily says. It shows an openness to outside opinions that hasn’t been obvious in the past, she adds.
QI researchers and healthcare experts also have been heartened by that newfound openness at the OHRP. It’s a solid signal that voices of protest were successful in grabbing the attention of OHRP officials, they say.
“The fact that they rescinded a prior ruling based on pushback from the field is quite important,” says Robert Wachter, MD, professor and chief of the division of hospital medicine at the University of California at San Francisco, a former SHM president, and author of the blog “Wachter’s World” (www.wachtersworld.com). “It says that they have at least heard and responded to pressure from people doing this work.”
Unless there is a clear-cut set of rules that allow researchers to easily figure out when a study might catch the attention of the OHRP, many simply may decide against pursuing QI studies.
Dr. Wachter and others hope the latest communications from the OHRP are a sign officials at the agency are open to outside opinions and ready to start a dialogue.
That would be an important change, says Michael A. Matthay, MD, a professor of medicine and anesthesia at the University of California San Francisco. Up until now, the agency has been unfettered.
Dr. Matthay has had the experience of being second-guessed by the OHRP. In 2003, he was a researcher on a study sponsored and overseen by the National Institutes of Health. The research was brought to a screeching halt when officials at the OHRP decided they didn’t like the study’s design.
Although that study eventually was allowed to resume, the down time wasn’t without its costs, since it delayed results that eventually had a significant effect on patient care, says Dr. Matthay.
The Hopkins case is just another example of what happens when a government agency like the OHRP is allowed to act without oversight of its own actions, experts contend.
It highlights the agency’s ineffectiveness and inability to protect patient interests, Dr. Matthay suggests. “It’s not going to result in a better quality of care, and it’s not protecting patient rights,” he concludes.
“I’m a hopeful guy,” Dr. Wachter says. “If you’d asked me three months ago, when the ruling first came out, whether we would be able to get an agency that had previously been impervious to public pressure to notice and pay attention, I might not have believed it.
“I think we’ve already gotten somewhere. This is just the first step. And it’s not a trivial first step. Federal agencies tend to turn off the phone and e-mail in response to pressure. We’ve shaken them by the shoulders. They have to realize how much turmoil they’re creating in the field and why this is going to be harmful to quality care of patients.”
In one of the clearest signs that the “pushback” from researchers has had an effect, officials at the OHRP admitted the Hopkins case might have caused confusion among QI researchers. The agency would like to help clear things up, Dr. Pritchard says.
“Our impression is that many institutions are currently grappling with the challenges of determining when QI studies require [internal board review] and when informed consent should be required or waived,” he allows. “You should also be aware that, going forward, HHS officials will make a sincere effort to improve communications with medical providers and researchers so that quality improvement initiatives that pose minimal risks to subjects are not inhibited by the regulations. We’re also encouraging any providers or researchers with questions about these regulations to contact us for guidance. In addition, we’re reviewing the application of these rules to evidence-based quality improvement activities, like the Johns Hopkins project, and whether any changes are needed to encourage such work while safeguarding the rights and welfare of human subjects in research.”
Dr. Wachter and others hope there will be much more communication between researchers and the OHRP.
“My hope is that this is not done, that this is the beginning of a very important conversation,” Dr. Wachter says. “If it is done, then this has simply been a Pyrrhic victory.” TH
Linda Carroll is a medical writer based in New Jersey.
In a move that pleased many researchers, the Office of Human Research Protections (OHRP) in mid-February reversed its decision to shut down a Johns Hopkins Quality Improvement study in Michigan.
On the heels of an SHM-led coalition’s efforts, a letter to the Hopkins researchers said the OHRP decided to move on and would immediately lift its ban on data collection by the Michigan hospitals participating in the study.
At first glance the new decision appeared to be a victory for researchers and others who worried the OHRP’s earlier ruling might have a chilling effect on quality improvement (QI) studies. A closer examination of the agency’s response shows that while officials at the OHRP heard and reacted to the loud outcry from the medical community, they haven’t significantly changed their approach to regulating QI research.
In fact, the OHRP’s director explains the apparent about-face wasn’t really a reversal. It simply was a determination that the time for regulation already had passed—that essentially the horse already left the barn.
“Because the five-part intervention (including the checklist) has now been adopted by the Michigan hospitals as a proven effective standard of practice, the intervention no longer represents a research intervention with the patients at the hospitals, and is therefore not research involving human subjects,” says OHRP Director Ivor Pritchard, PhD. “And because Johns Hopkins is not receiving private, identifiable data from the Michigan hospitals, but rather de-identified data about the frequency of infections in the ICUs, this research activity is not research involving human subjects.”
What this means is the OHRP again may decide to step in if it were to receive a complaint about an ongoing QI study, like the Johns Hopkins project.
“Assuming [Health and Human Services] had the authority to regulate the activity, and the regulations had not changed, we would continue to advise institutions that such a QI study would fall under the U.S. Department of Health and Human Services (HHS) protection of human subject regulations,” Dr. Pritchard says. “Whether we would take a compliance action in response to a complaint about such a research activity is a different matter, however, and would depend on the specific facts of the case.”
The most recent letter to Johns Hopkins and Dr. Pritchard’s responses show there really hasn’t been any resolution to the problem, says Mary Ann Baily, PhD, an associate for ethics and health policy at the Hastings Center in Garrison, N.Y.
In fact, the letter to Johns Hopkins suggests a study with the exact design might again run afoul of the OHRP, Dr. Baily says.
Still, Dr. Pritchard’s comments show there have been some changes in the way the OHRP views its role when it comes to QI studies and this may impact the way the agency responds next time, Dr. Baily says.
Although Dr. Pritchard didn’t rule out the possibility a future study might be shut down, the agency appears to have become sensitized to the concerns of the research community. “Our current efforts are directed toward finding better ways to communicate the relationship between quality improvement and research to both the healthcare and research communities,” he explains. “At the same time we are also reviewing the application of these rules to QI activities like the Johns Hopkins project and whether any changes are needed to encourage such work.”
This is a good sign, Dr. Baily says. It shows an openness to outside opinions that hasn’t been obvious in the past, she adds.
QI researchers and healthcare experts also have been heartened by that newfound openness at the OHRP. It’s a solid signal that voices of protest were successful in grabbing the attention of OHRP officials, they say.
“The fact that they rescinded a prior ruling based on pushback from the field is quite important,” says Robert Wachter, MD, professor and chief of the division of hospital medicine at the University of California at San Francisco, a former SHM president, and author of the blog “Wachter’s World” (www.wachtersworld.com). “It says that they have at least heard and responded to pressure from people doing this work.”
Unless there is a clear-cut set of rules that allow researchers to easily figure out when a study might catch the attention of the OHRP, many simply may decide against pursuing QI studies.
Dr. Wachter and others hope the latest communications from the OHRP are a sign officials at the agency are open to outside opinions and ready to start a dialogue.
That would be an important change, says Michael A. Matthay, MD, a professor of medicine and anesthesia at the University of California San Francisco. Up until now, the agency has been unfettered.
Dr. Matthay has had the experience of being second-guessed by the OHRP. In 2003, he was a researcher on a study sponsored and overseen by the National Institutes of Health. The research was brought to a screeching halt when officials at the OHRP decided they didn’t like the study’s design.
Although that study eventually was allowed to resume, the down time wasn’t without its costs, since it delayed results that eventually had a significant effect on patient care, says Dr. Matthay.
The Hopkins case is just another example of what happens when a government agency like the OHRP is allowed to act without oversight of its own actions, experts contend.
It highlights the agency’s ineffectiveness and inability to protect patient interests, Dr. Matthay suggests. “It’s not going to result in a better quality of care, and it’s not protecting patient rights,” he concludes.
“I’m a hopeful guy,” Dr. Wachter says. “If you’d asked me three months ago, when the ruling first came out, whether we would be able to get people an agency that had previously been impervious to public pressure to notice and pay attention, I might not have believed it.
“I think we’ve already gotten somewhere. This is just the first step. And it’s not a trivial first step. Federal agencies tend to turn off the phone and e-mail in response to pressure. We’ve shaken them by the shoulders. They have to realize how much turmoil they’re creating in the field and why this is going to be harmful to quality care of patients.”
In one of the clearest signs that the “pushback” from researchers has had an effect, officials at the OHRP admitted the Hopkins case might have caused confusion among QI researchers. The agency would like to help clear things up, Dr. Pritchard says.
“Our impression is that many institutions are currently grappling with the challenges of determining when QI studies require [internal board review] and when informed consent should be required or waived,” he allows. “You should also be aware that, going forward, HHS officials will make a sincere effort to improve communications with medical providers and researchers so that quality improvement initiatives that pose minimal risks to subjects are not inhibited by the regulations. We’re also encouraging any providers or researchers with questions about these regulations to contact us for guidance. In addition, we’re reviewing the application of these rules to evidence-based quality improvement activities, like the Johns Hopkins project, and whether any changes are needed to encourage such work while safeguarding the rights and welfare of human subjects in research.”
Dr. Wachter and others hope there will be much more communication between researchers and the OHRP.
“My hope is that this is not done, that this is the beginning of a very important conversation,” Dr. Wachter says. “If it is done, then this has simply been a Pyrrhic victory.” TH
Linda Carroll is a medical writer based in New Jersey.
In a move that pleased many researchers, the Office of Human Research Protections (OHRP) in mid-February reversed its decision to shut down a Johns Hopkins Quality Improvement study in Michigan.
On the heels of an SHM-led coalition’s efforts, a letter to the Hopkins researchers said the OHRP decided to move on and would immediately lift its ban on data collection by the Michigan hospitals participating in the study.
At first glance the new decision appeared to be a victory for researchers and others who worried the OHRP’s earlier ruling might have a chilling effect on quality improvement (QI) studies. A closer examination of the agency’s response shows that while officials at the OHRP heard and reacted to the loud outcry from the medical community, they haven’t significantly changed their approach to regulating QI research.
In fact, the OHRP’s director explains the apparent about-face wasn’t really a reversal. It simply was a determination that the time for regulation already had passed—that essentially the horse already left the barn.
“Because the five-part intervention (including the checklist) has now been adopted by the Michigan hospitals as a proven effective standard of practice, the intervention no longer represents a research intervention with the patients at the hospitals, and is therefore not research involving human subjects,” says OHRP Director Ivor Pritchard, PhD. “And because Johns Hopkins is not receiving private, identifiable data from the Michigan hospitals, but rather de-identified data about the frequency of infections in the ICUs, this research activity is not research involving human subjects.”
What this means is the OHRP again may decide to step in if it were to receive a complaint about an ongoing QI study, like the Johns Hopkins project.
“Assuming [Health and Human Services] had the authority to regulate the activity, and the regulations had not changed, we would continue to advise institutions that such a QI study would fall under the U.S. Department of Health and Human Services (HHS) protection of human subject regulations,” Dr. Pritchard says. “Whether we would take a compliance action in response to a complaint about such a research activity is a different matter, however, and would depend on the specific facts of the case.”
The most recent letter to Johns Hopkins and Dr. Pritchard’s responses show there really hasn’t been any resolution to the problem, says Mary Ann Baily, PhD, an associate for ethics and health policy at the Hastings Center in Garrison, N.Y.
In fact, the letter to Johns Hopkins suggests a study with the exact design might again run afoul of the OHRP, Dr. Baily says.
Still, Dr. Pritchard’s comments show there have been some changes in the way the OHRP views its role when it comes to QI studies and this may impact the way the agency responds next time, Dr. Baily says.
Although Dr. Pritchard didn’t rule out the possibility a future study might be shut down, the agency appears to have become sensitized to the concerns of the research community. “Our current efforts are directed toward finding better ways to communicate the relationship between quality improvement and research to both the healthcare and research communities,” he explains. “At the same time we are also reviewing the application of these rules to QI activities like the Johns Hopkins project and whether any changes are needed to encourage such work.”
This is a good sign, Dr. Baily says. It shows an openness to outside opinions that hasn’t been obvious in the past, she adds.
QI researchers and healthcare experts also have been heartened by that newfound openness at the OHRP. It’s a solid signal that voices of protest were successful in grabbing the attention of OHRP officials, they say.
“The fact that they rescinded a prior ruling based on pushback from the field is quite important,” says Robert Wachter, MD, professor and chief of the division of hospital medicine at the University of California at San Francisco, a former SHM president, and author of the blog “Wachter’s World” (www.wachtersworld.com). “It says that they have at least heard and responded to pressure from people doing this work.”
Unless there is a clear-cut set of rules that allow researchers to easily figure out when a study might catch the attention of the OHRP, many simply may decide against pursuing QI studies.
Dr. Wachter and others hope the latest communications from the OHRP are a sign officials at the agency are open to outside opinions and ready to start a dialogue.
That would be an important change, says Michael A. Matthay, MD, a professor of medicine and anesthesia at the University of California San Francisco. Up until now, the agency has been unfettered.
Dr. Matthay has had the experience of being second-guessed by the OHRP. In 2003, he was a researcher on a study sponsored and overseen by the National Institutes of Health. The research was brought to a screeching halt when officials at the OHRP decided they didn’t like the study’s design.
Although that study eventually was allowed to resume, the down time wasn’t without its costs, since it delayed results that eventually had a significant effect on patient care, says Dr. Matthay.
The Hopkins case is just another example of what happens when a government agency like the OHRP is allowed to act without oversight of its own actions, experts contend.
It highlights the agency’s ineffectiveness and inability to protect patient interests, Dr. Matthay suggests. “It’s not going to result in a better quality of care, and it’s not protecting patient rights,” he concludes.
“I’m a hopeful guy,” Dr. Wachter says. “If you’d asked me three months ago, when the ruling first came out, whether we would be able to get an agency that had previously been impervious to public pressure to notice and pay attention, I might not have believed it.
“I think we’ve already gotten somewhere. This is just the first step. And it’s not a trivial first step. Federal agencies tend to turn off the phone and e-mail in response to pressure. We’ve shaken them by the shoulders. They have to realize how much turmoil they’re creating in the field and why this is going to be harmful to quality care of patients.”
In one of the clearest signs that the “pushback” from researchers has had an effect, officials at the OHRP admitted the Hopkins case might have caused confusion among QI researchers. The agency would like to help clear things up, Dr. Pritchard says.
“Our impression is that many institutions are currently grappling with the challenges of determining when QI studies require [internal board review] and when informed consent should be required or waived,” he allows. “You should also be aware that, going forward, HHS officials will make a sincere effort to improve communications with medical providers and researchers so that quality improvement initiatives that pose minimal risks to subjects are not inhibited by the regulations. We’re also encouraging any providers or researchers with questions about these regulations to contact us for guidance. In addition, we’re reviewing the application of these rules to evidence-based quality improvement activities, like the Johns Hopkins project, and whether any changes are needed to encourage such work while safeguarding the rights and welfare of human subjects in research.”
Dr. Wachter and others hope there will be much more communication between researchers and the OHRP.
“My hope is that this is not done, that this is the beginning of a very important conversation,” Dr. Wachter says. “If it is done, then this has simply been a Pyrrhic victory.” TH
Linda Carroll is a medical writer based in New Jersey.
Research Riddle
The recent uproar over the Office of Human Research Protections (OHRP) ordering a multicenter study of a Michigan ICU checklist to halt data collection has left quality improvement (QI) researchers, ethicists, and legal experts scratching their heads.
Even before the Michigan debacle, there was considerable confusion about how patient privacy rules included in the Health Insurance Portability and Accountability Act of 1996 (HIPAA) affected QI studies. No one was really sure when institutional review boards (IRBs) needed to be involved and when patients needed to be officially consented.
HIPAA contains specific language addressing how patients and patient data should be handled by researchers. And experts say that’s a good thing—in theory.
But a 2007 study in the Journal of the American Medical Association found that many epidemiologists feel the rules have adversely affected research and done little to improve patient privacy.1
The situation for QI researchers is even more confusing. Many see studies examining the effect of QI interventions as fundamentally different from “human subjects research.” And because of this many were shocked when the OHRP halted data collection from the Michigan care-checklist program.
In that case, the OHRP argued that because the study was prospective, it wasn’t simply QI, but rather “human subjects research.” The OHRP demanded that researchers run their plans by the IRBs of every one of the 103 hospitals involved in the research before relenting Feb. 15 and letting the study resume. (Initial results of the study were published in the New England Journal of Medicine in 2006.2)
Many medical experts view the intervention being studied in Michigan—a simple checklist aimed at reminding physicians to follow some common-sense procedures designed to lower the infection rate associated with central lines—as a straightforward attempt at QI. They argued the study should be exempt from some of the rules regarding human research subjects.
Experts say the OHRP’s initial ruling made what was already a confusing subject into an impossibly muddled morass that may have a chilling effect on the publication of QI studies.
Some say the OHRP has extended HIPAA too far. First of all, legal experts say, it must be understood that HIPAA’s rules don’t apply equally to everyone. For example, public health authorities are allowed to gather patient data without consent if they are trying to prevent the spread of disease. And hospitals are allowed to use data to improve the quality of healthcare, says James G. Hodge Jr., an associate professor at the Johns Hopkins School of Public Health in Baltimore and executive director of the Centers for Law and the Public’s Health at Johns Hopkins.
One key to resolving the issue may be for healthcare experts to come up with a definition of what constitutes “human subjects research.”
“This is where issues keep coming up,” Hodge says. “Is QI just an extension of clinical care or is it research? The answer to that question will tell you whether it implicates the rule or not.”
Researchers, ethicists, and legal experts hotly debate these issues. No one appears to know exactly where to draw the line that divides human subjects research from QI studies.
Even before the checklist controversy arose, HIPAA was already having a deleterious impact on the exchange of QI information, says Mary Ann Baily, PhD, an associate for ethics and health policy at the Hastings Center in Garrison, N.Y.
Loath to run afoul of the OHRP, at least one of the larger managed-care providers decided against publishing results of its QI studies, Dr. Baily says. “They’re just trying to stay out of the OHRP’s way,” she explains. “At a recent workshop, researchers said, ‘We don’t publish our data. We make our own system work better and keep our heads down. That way we don’t run into any problems. It’s safer that way.’ ”
This means nobody benefits from that company’s research. “What a waste that is,” Dr. Baily asserts.
Not everyone has taken such a defensive position. But there is a wide range of opinions among QI researchers around the country.
“Certainly, as someone who is involved in QI research on an operational level and who is also interested in conducting research looking at the effects of QI interventions, I struggle with this regularly,” says Peter Lindenauer, MD, MSc, associate professor of medicine at the Tufts University School of Medicine in Boston and associate medical director, Division of Healthcare Quality, at Baystate Medical Center.
“QI officers don’t look at the work they do as being research,” Dr. Lindenauer notes. “They’re often translating research into practice and implementing and developing strategies designed to improve care. So, when you think about it in those terms, it would never dawn on a typical QI officer to seek IRB approval or to get consent from patients to participate.”
Some researchers avoid the issue by looking only at data with patient identifiers stripped. The assumption is that HIPAA rules apply only to medical records with identifiers intact.
Under that assumption, Lakshmi Halasyamani, MD, has performed two heart failure studies without getting into issues of patient consent. But she sees potential problems with future research.
“Because we were looking at the impact of interventions across the whole population of heart failure patients, it wasn’t a problem,” says Dr. Halasyamani, vice chair for the department of medicine at St. Joseph Mercy Hospital in Ann Arbor, Mich., and a member of SHM’s Board of Directors. “So long as you’re looking at global outcomes, the regulations don’t have much impact.”
But Dr. Halasyamani and her colleagues may want to start looking at the benefits of interventions on subgroups. And this is where things can get messy, she says.
Healthcare professionals really need to figure out a working definition for what constitutes research—and they need to do it soon, Dr. Halasyamani says. Without a good definition, Dr. Halasyamani can see a slowdown of QI research and the possibility of researchers using a QI loophole to get around IRBs. “I could see where some researchers might be tempted to call their studies QI to avoid bureaucratic hassles and IRB oversight,” she says.
Some experts think the minute you decide to publish, by definition you’re doing research.
“I believe you have an ethical obligation to share what you learn with other hospitals,” says Dr. Lindenauer. “So, of course, you want to write it up. But once you start to talk about publication and sharing you start to get to the point where you’re crossing the line—where you’re creating generalizable knowledge.”
And that is precisely when government organizations like the OHRP think you’ve crossed over into research, Dr. Lindenauer says. “It’s a tricky question,” he adds.
Ethicist Baily agrees that experts need to work on coming up with a practical definition of research. Right now, the situation is impossible, she says. Take, for example, a medical plan that wants to send postcards to encourage patients to show up for an annual physical. If researchers want to learn whether that technique works, do they need to send a postcard, prior to the reminder postcard, to let patients know that they’re going to be part of a study, she asks.
And even more important, with all the staff cuts at medical institutions around the country, shouldn’t QI officers study whether these cost-cutting measures adversely affect patient care, she asks. TH
Linda Carroll is a medical journalist based in New Jersey.
References
- Ness RB, Joint Policy Committee, Societies of Epidemiology. Influence of the HIPAA Privacy Rule on health research. JAMA. 2007 Nov 14;298(18):2164-2170.
- Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med. 2006 Dec 28;355(26):2725-2732. Erratum in: N Engl J Med. 2007 Jun 21;356(25):2660.
Watch and Earn
With recent changes in Medicare rules making reimbursement even trickier for patients who aren’t well enough to be sent home quickly but aren’t sick enough to move to an inpatient bed, hospitalists are increasingly being tapped to set up observation units at medical centers around the country.
These patients, experts say, are the ones hospitals are most likely to lose money on. That’s because the Centers for Medicare and Medicaid Services (CMS) won’t pay unless a patient meets stringent guidelines for admission to the hospital. And while recently rewritten rules allow payment for 24 hours of observation, they can also lead to denial of claims when patients aren’t considered sick enough to have been admitted.
When they’re well run, observation units can even help cover losses from emergency departments (ED) that have trouble collecting on bills because most of their patient population is uninsured or underinsured.
But the drive to create observation units isn’t just about money, says Frank W. Peacock, MD, vice chair of the emergency department at the Cleveland Clinic in Ohio. Studies have shown that death rates drop when hospitals add observation units, Dr. Peacock says.
Despite these clear benefits, experts estimate that a mere 20% of medical centers around the nation have observation units.
This may in part be because creating such a unit—also known as a clinical decision unit—takes a lot of planning to start up, says William T. Ford, MD, medical director for Nashville, Tenn.-based Cogent Healthcare and chief of the section of hospital medicine at Temple University in Philadelphia. Without proper planning, observation units can fail to flourish—or just fail.
That’s what happened at Temple, Dr. Ford says. “The original observation unit got bogged down in its own infrastructure,” he explains. “It wasn’t cost effective.”
After that first attempt failed, Temple reached out to Cogent and Dr. Ford for help in developing an observation unit that would be financially viable.
Observation Origins
Classically, Dr. Ford says, observation units were developed and staffed by emergency department physicians. But these days, the units are increasingly being designed and run by hospitalists, he says, adding that this change makes a lot of sense.
“Emergency department physicians don’t have the time or the resources to monitor patients for long periods of time,” Dr. Ford says. “That’s why I think some of the early ones failed—they didn’t work as efficiently and were staffed by the wrong people.”
Hospitalist Jason Napolitano, MD, agrees with the choice to staff observation units with hospitalists. “We want our emergency department physicians to be able to focus on life-or-death issues and on the stabilization of very sick patients,” says Dr. Napolitano, medical director of the observation unit at the University of California at Los Angeles Medical Center. “These are things that ED physicians do spectacularly well. But when it gets down to management and reassessment of patients over time, we wanted a dedicated staff of hospitalists who were trained in internal medicine.”
It made sense that many of the early observation units were staffed by ED doctors, says Mark Flitcraft, a nurse and unit director of nursing at UCLA. That’s because the units were originally adjuncts to the ED. These early units were initially seen as a way to take the pressure off overcrowded, overworked EDs, Flitcraft says. “They were a way for hospitals to avoid [diverting patients] as the beds in the ED started filling up,” he adds.
Avoiding such diversions is still one of the main justifications for adding an observation unit, Dr. Ford says. “The observation unit helps increase throughput time.”
Still, he says, if you’re going to create an observation unit staffed by hospitalists, “you need to make sure that the emergency department buys in to the concept. They should be your best friends. Go over and meet with them. If they don’t buy into the idea, then you’re going to have problems.”
Time Is of the Essence
For an observation unit to work well, the staff needs to think about time in a different way, Flitcraft says.
“It’s more of an outpatient designation from a Medicare standpoint,” he explains. “The focus has to be hours rather than days. You really need to know that the clock is ticking and work on rapid turnaround.” Take discharge, for example, Flitcraft says. Normally a hospitalist would wait for morning to send a patient home. “But there are patients we might discharge at 10 p.m.,” he says. “When they are stable they go home.”
In the observation unit, staff members always have the end in sight, agrees Robin J. Trupp, a graduate student at Ohio State University, an expert on observation units, and president of the American Association of Heart Failure Nurses. “You know what your goal is,” she adds. “There’s a 24-hour clock and it’s always ticking. At the end of 24 hours you have to make a treatment decision: admit the patient or send him home.”
Because observation units are generally limited to treating a select group of medical conditions, they can be more efficient. Some observation units are limited to only one or two diagnoses (e.g., chest pain and heart failure). Others see a slightly broader spectrum of illnesses, including asthma, stomach pain, and pneumonia.
One byproduct of limiting the number of conditions treated in the unit is ending up with a staff that can become specialized in treating those ailments, experts say.
“In the observation unit you’re not looking at urinary tract infections or doing stitches,” Trupp says. “You’re just working on this population. You become an expert on how it’s treated and managed.”
And that offers another advantage: the possibility of doing more patient education.
She points to the example of a unit dedicated to treating heart failure patients.
“You can take advantage of the fact that at this moment, the patient can clearly see cause and effect and maybe you’ll have a chance at getting some behavior changes,” Trupp says. “It’s the case of having put their hand in the fire and feeling and having learned it’s hot; they’ll learn not to do it again. They might learn that the symptoms that landed them in the ED came from excess salt load due to eating Chinese food or chips and salsa.”
Ultimately, for certain conditions, observation units can provide better care. Studies have shown that in the three months following a visit to the hospital, heart failure patients are far less likely to return if they’ve been seen in the observation unit rather than being treated as inpatients.
And if that weren’t enough of an inducement to administrators to create observation units, Dr. Peacock offers one other: The units can do more than pay for themselves.
“We are in an urban environment, and our patient population is not well insured,” he says. “There are years when the ED loses money. The observation unit never loses money. In fact, it’s saved us a few times. That was a pleasant surprise.” TH
Linda Carroll is a medical writer based in New Jersey.
With recent changes in Medicare rules making reimbursement even trickier for patients who aren’t well enough to be sent home quickly but aren’t sick enough to move to an inpatient bed, hospitalists are increasingly being tapped to set up observation units at medical centers around the country.
These patients, experts say, are the ones hospitals are most likely to lose money on. That’s because the Centers for Medicare and Medicaid Services (CMS) won’t pay unless a patient meets stringent guidelines for admission to the hospital. And while recently rewritten rules allow payment for 24 hours of observation, they can also lead to denial of claims when patients aren’t considered sick enough to have been admitted.
When they’re well run, observation units can even help cover losses from emergency departments (ED) that have trouble collecting on bills because most of their patient population is uninsured or underinsured.
But the drive to create observation units isn’t just about money, says Frank W. Peacock, MD, vice chair of the emergency department at the Cleveland Clinic in Ohio. Studies have shown that death rates drop when hospitals add observation units, Dr. Peacock says.
Despite these clear benefits, experts estimate that a mere 20% of medical centers around the nation have observation units.
This may be due in part to the fact that creating such a unit—also known as a clinical decision unit—takes a lot of planning, says William T. Ford, MD, medical director for Nashville, Tenn.-based Cogent Healthcare and chief of the section of hospital medicine at Temple University in Philadelphia. Without proper planning, observation units can fail to flourish—or just fail.
That’s what happened at Temple, Dr. Ford says. “The original observation unit got bogged down in its own infrastructure,” he explains. “It wasn’t cost effective.”
After that first attempt failed, Temple reached out to Cogent and Dr. Ford for help in developing an observation unit that would be financially viable.
Observation Origins
Classically, Dr. Ford says, observation units were developed and staffed by emergency department physicians. But these days, the units are increasingly being designed and run by hospitalists, he says, adding that this change makes a lot of sense.
“Emergency department physicians don’t have the time or the resources to monitor patients for long periods of time,” Dr. Ford says. “That’s why I think some of the early ones failed—they didn’t work as efficiently and were staffed by the wrong people.”
Hospitalist Jason Napolitano, MD, agrees with the choice to staff observation units with hospitalists. “We want our emergency department physicians to be able to focus on life-or-death issues and on the stabilization of very sick patients,” says Dr. Napolitano, medical director of the observation unit at the University of California at Los Angeles Medical Center. “These are things that ED physicians do spectacularly well. But when it gets down to management and reassessment of patients over time, we wanted a dedicated staff of hospitalists who were trained in internal medicine.”
It made sense that many of the early observation units were staffed by ED doctors, says Mark Flitcraft, a nurse and unit director of nursing at UCLA. That’s because the units were originally adjuncts to the ED. These early units were initially seen as a way to take the pressure off overcrowded, overworked EDs, Flitcraft says. “They were a way for hospitals to avoid [diverting patients] as the beds in the ED started filling up,” he adds.
Avoiding such diversions is still one of the main justifications for adding an observation unit, Dr. Ford says. “The observation unit helps increase throughput time.”
Still, he says, if you’re going to create an observation unit staffed by hospitalists, “you need to make sure that the emergency department buys in to the concept. They should be your best friends. Go over and meet with them. If they don’t buy into the idea, then you’re going to have problems.”
Time Is of the Essence
For an observation unit to work well, the staff needs to think about time in a different way, Flitcraft says.
“It’s more of an outpatient designation from a Medicare standpoint,” he explains. “The focus has to be hours rather than days. You really need to know that the clock is ticking and work on rapid turnaround.” Take discharge, for example, Flitcraft says. Normally a hospitalist would wait for morning to send a patient home. “But there are patients we might discharge at 10 p.m.,” he says. “When they are stable they go home.”
In the observation unit, staff members always have the end in sight, agrees Robin J. Trupp, a graduate student at Ohio State University, an expert on observation units, and president of the American Association of Heart Failure Nurses. “You know what your goal is,” she adds. “There’s a 24-hour clock and it’s always ticking. At the end of 24 hours you have to make a treatment decision: admit the patient or send him home.”
Because observation units are generally limited to treating a select group of medical conditions, they can be more efficient. Some observation units are limited to only one or two diagnoses (e.g., chest pain and heart failure). Others see a slightly broader spectrum of illnesses, including asthma, stomach pain, and pneumonia.
One byproduct of limiting the number of conditions treated in the unit, experts say, is a staff that becomes specialized in treating those ailments.
“In the observation unit you’re not looking at urinary tract infections or doing stitches,” Trupp says. “You’re just working on this population. You become an expert on how it’s treated and managed.”
And that offers another advantage: the possibility of doing more patient education.
She points to the example of a unit dedicated to treating heart failure patients.
“You can take advantage of the fact that at this moment, the patient can clearly see cause and effect and maybe you’ll have a chance at getting some behavior changes,” Trupp says. “It’s the case of having put their hand in the fire and having learned it’s hot; they’ll learn not to do it again. They might learn that the symptoms that landed them in the ED came from excess salt load due to eating Chinese food or chips and salsa.”
Ultimately, for certain conditions, observation units can provide better care. Studies have shown that in the three months following a visit to the hospital, heart failure patients are far less likely to return if they’ve been seen in the observation unit rather than being treated as inpatients.
And if that weren’t enough of an inducement to administrators to create observation units, Dr. Peacock offers one other: The units can do more than pay for themselves.
“We are in an urban environment, and our patient population is not well insured,” he says. “There are years when the ED loses money. The observation unit never loses money. In fact, it’s saved us a few times. That was a pleasant surprise.” TH
Linda Carroll is a medical writer based in New Jersey.
SHM to Challenge OHRP's Checklist Ruling
Quality improvement (QI) researchers were shocked and dismayed when the Office of Human Research Protections (OHRP) froze a multicenter project investigating the use of checklists to reduce infections in intensive care units (ICUs).
Even though this simple intervention had been shown to dramatically cut ICU infection rates, the OHRP opted to halt the study because Johns Hopkins researchers hadn’t run their protocols by the institutional review boards (IRBs) of the 100-plus hospitals participating in the study.
Fearing that this ruling might have a chilling effect on QI studies nationwide, SHM immediately set out to build a coalition of medical organizations to challenge the OHRP’s decision.
SHM is joining several other medical societies to send a letter to Health and Human Services (HHS) Secretary Mike Leavitt to ask him to lift the OHRP’s ban on data collection. In addition, SHM has posted a letter on its Legislative Action Center Web page (accessible at www.hospitalmedicine.org/beheard) so members can add their voices to the protest.
At Issue
What shocked many was the breadth of the OHRP’s ruling—and the rationale behind it. The OHRP’s problem with the Hopkins study wasn’t that the intervention was harmful—or even risky. The problem was that researchers Pronovost, et al., had published their results in the New England Journal of Medicine in 2006 and hadn’t treated this study as “human subjects research.”1
In general, the OHRP’s goals are laudable, says Robert Wachter, MD, professor and chief of the division of hospital medicine at the University of California, San Francisco, and a former president of SHM. They want to protect patients.
“I’m not clamoring to get rid of IRBs or to subject unwitting patients to potentially harmful therapies,” Dr. Wachter says. “But it’s crucial to find the right balance between protections built into the research world and allowing people to do quality improvement. This ruling is wrong.”
You want constant implementation of strategies to improve quality of care, along with attempts to measure the impact of those strategies, Dr. Wachter says. “This is the kind of thing that hospitalists should be doing as soon as they wake up in the morning,” he adds.
The checklist at the center of the controversy included five easily implemented procedures that the Centers for Disease Control and Prevention had previously identified as effective in reducing the rate of infections that could result when a central line catheter was inserted. Among the procedures on the checklist were such seemingly commonsense measures as hand washing, cleaning the patient’s skin with chlorhexidine, and using barrier precautions during catheter insertions.
The Hopkins researchers suspected that in the busy ICU environment, these procedures were not routinely followed. Physicians might benefit from a reminder—a checklist.
To determine whether something as simple as a checklist could have an effect on infection rates, researchers from the Johns Hopkins Center for Innovation in Quality Patient Care partnered with 103 hospitals in Michigan that agreed to implement the checklists and keep track of infection rates.
When the researchers compared infection rates before and after the checklists were implemented, they found infections had dropped by two-thirds within the first three months. That’s pretty significant when you consider that each year in ICUs across the nation, there are 80,000 catheter-related bloodstream infections that result in an estimated 28,000 patient deaths.
Backlash
The Hopkins researchers figured they wouldn’t have problems with OHRP because they weren’t studying a new, unproven intervention. They were simply trying to discover the impact of providing a checklist of proven procedures.
Just to be on the safe side, though, the researchers presented their plans to the Hopkins IRB, which determined that the study was exempt from review.
So it was a surprise to everyone when the OHRP, acting on an anonymous complaint, weighed in and shut down data collection after ruling that IRBs from each of the 103 hospitals participating in the study would need to separately evaluate and approve the study.
“Most people read about this and their jaws dropped,” Dr. Wachter says. “They couldn’t believe that the federal government would restrict research on the use of a checklist. It’s wacky.”
This is a perfect example of regulatory overreach, Dr. Wachter says.
“It can be challenging to draw the line,” he adds. “But, to me, it defies common sense to say that a program in which we are going to implement a checklist and then collect data to see if it works constitutes research and therefore requires the same amount of patient protection as a study of a new device or a potentially toxic medication.”
What made the OHRP ruling seem even more odd was the fact that another division of the HHS, the Agency for Healthcare Research and Quality, had added Dr. Pronovost’s study to its list of “classic” papers shortly after the research was published.
Making matters worse in many researchers’ minds was the fact that the OHRP didn’t stop with this study. An official letter to officials at Johns Hopkins extended the agency’s reach to all Hopkins “quality assurance/quality improvement proposals for which federal funding is being sought.”
The OHRP ordered that these proposals be “examined to determine if IRB review was conducted or if exempt status was not granted inappropriately. If these are not the case, the [principal investigators] for the proposals will be contacted and informed that prospective data collection requires IRB review and that an application for exempt status will not be accepted for these projects.”
And the OHRP went even further. The November letter suggested that even quality assurance/quality improvement studies that included retrospective reviews might be construed as “human subjects research.”
In essence, that means any attempt to evaluate the impact of any type of change in procedures meant to improve quality of care would require IRB scrutiny and—quite likely—patient consents, says Mary Ann Baily, PhD, an associate for ethics and health policy at the Hastings Center in Garrison, N.Y.
“I think it’s very maddening,” Dr. Baily says. “The OHRP has created an impossible situation. Why in heaven’s name would the OHRP want to tie QI researchers up in knots?”
Despite multiple requests from The Hospitalist, officials at the OHRP declined to comment and clarify the issue.
Some QI researchers see the need for regulation. There should be some oversight, even when it comes to QI, says Lakshmi Halasyamani, MD, vice chair for the department of medicine at St. Joseph Mercy Hospital in Ann Arbor, Mich., and a member of SHM’s Board of Directors. While the intervention involved in the Johns Hopkins case seems relatively benign, this isn’t always the case with QI.
It comes down to evaluating the level of risk to patients, says Dr. Halasyamani. And there needs to be someone, somewhere in the process evaluating the risks to patients of each intervention, Dr. Halasyamani says.
“We shouldn’t be creating a whole new level of bureaucracy that will slow down low-risk interventions that could have a huge impact,” she adds. “But you want someone looking at whether the interventions could have a downside.”
When it’s a low-risk intervention—like the one initiated by the Johns Hopkins researchers—then the forms filled out by patients giving consent for treatment should be enough, Dr. Halasyamani says.
While ethicists and researchers kick these ideas around, others have decided to take more immediate action to try to clear the way for research on low-risk interventions.
Dr. Wachter and others are rallying around the Hopkins researchers and orchestrating a letter-writing campaign. “You may ask why we are pushing back so hard to get people to take another look at the OHRP’s ruling on this one study,” he says. “Johns Hopkins has the money and infrastructure to deal with this. They will find a way to get IRB approval from the Michigan hospitals.
“But what about the next time I want to do a quality improvement study, or when one of my residents on a six-month rotation wants to do one? I’m going to say I don’t think you should. It’s going to take a month to get IRB approval and then, potentially, every patient, physician, nurse—basically everyone who comes into contact with the intervention—will need to be consented. The ruling will shut down innovation.” TH
Linda Carroll is a medical journalist based in New Jersey.
Reference
- Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med. 2006 Dec 28;355(26):2725-2732. Erratum in: N Engl J Med. 2007 Jun 21;356(25):2660.
SHM is joining several other medical societies to send a letter to Health and Human Services (HHS) Secretary Mike Leavitt to ask him to lift the OHRP’s ban on data collection. In addition, the SHM has posted a letter on its Legislative Action Center Web page (accessible at www.hospitalmedicine.org/beheard) so members can add their voices to the protest.
At Issue
What shocked many was the breadth of the OHRP’s ruling—and the rationale behind it. The OHRP’s problem with the Hopkins study wasn’t that the intervention was harmful—or even risky. The problem was that researchers Pronovost, et al., had published their results in the New England Journal of Medicine in 2006 and hadn’t treated this study as “human subjects research.”1
In general, the OHRP’s goals are laudable, says Robert Wachter, MD, professor and chief of the division of hospital medicine at the University of California, San Francisco, and a former president of SHM. They want to protect patients.
“I’m not clamoring to get rid of IRBs or to subject unwitting patients to potentially harmful therapies,” Dr. Wachter says. “But it’s crucial to find the right balance between protections built into the research world and allowing people to do quality improvement. This ruling is wrong.”
You want strategies to improve quality of care to be implemented constantly, along with attempts to measure the impact of those strategies, Dr. Wachter says. “This is the kind of thing that hospitalists should be doing as soon as they wake up in the morning,” he adds.
The checklist at the center of the controversy included five easily implemented procedures that the Centers for Disease Control and Prevention had previously identified as effective in reducing the rate of infections that could result when a central line catheter was inserted. Among the procedures on the checklist were such seemingly commonsense measures as hand washing, cleaning the patient’s skin with chlorhexidine, and using barrier precautions during catheter insertions.
The Hopkins researchers suspected that in the busy ICU environment, these procedures were not routinely followed. Physicians might benefit from a reminder—a checklist.
To determine whether something as simple as a checklist could have an effect on infection rates, researchers from the Johns Hopkins Center for Innovation in Quality Patient Care partnered with 103 hospitals in Michigan that agreed to implement the checklists and keep track of infection rates.
When the researchers compared infection rates before and after the checklists were implemented, they found infections had dropped by two-thirds within the first three months. That’s pretty significant when you consider that each year in ICUs across the nation, there are 80,000 catheter-related bloodstream infections that result in an estimated 28,000 patient deaths.
Backlash
The Hopkins researchers figured they wouldn’t have problems with OHRP because they weren’t studying a new, unproven intervention. They were simply trying to discover the impact of providing a checklist of proven procedures.
Just to be on the safe side, though, the researchers presented their plans to the Hopkins IRB, which determined that the study was exempt from review.
So it was a surprise to everyone when the OHRP, acting on an anonymous complaint, weighed in and shut down data collection after ruling that IRBs from each of the 103 hospitals participating in the study would need to separately evaluate and approve the study.
“Most people read about this and their jaws dropped,” Dr. Wachter says. “They couldn’t believe that the federal government would restrict research on the use of a checklist. It’s wacky.”
This is a perfect example of regulatory overreach, Dr. Wachter says.
“It can be challenging to draw the line,” he adds. “But, to me, it defies common sense to say that a program in which we are going to implement a checklist and then collect data to see if it works constitutes research and therefore requires the same amount of patient protection as a study of a new device or a potentially toxic medication.”
What made the OHRP ruling seem even more odd was the fact that another division of the HHS, the Agency for Healthcare Research and Quality, had added Dr. Pronovost’s study to its list of “classic” papers shortly after the research was published.
Making matters worse in many researchers’ minds was the fact the OHRP didn’t stop with this study. An official letter to officials at Johns Hopkins extended the agency’s reach to all Hopkins “quality assurance/quality improvement proposals for which federal funding is being sought.”
The OHRP ordered that these proposals be “examined to determine if IRB review was conducted or if exempt status was not granted inappropriately. If these are not the case, the [principal investigators] for the proposals will be contacted and informed that prospective data collection requires IRB review and that an application for exempt status will not be accepted for these projects.”
And the OHRP went even further. The November letter suggested that even quality assurance/quality improvement studies that included retrospective reviews might be construed as “human subjects research.”
In essence, that means any attempt to evaluate the impact of any type of change in procedures meant to improve quality of care would require IRB scrutiny and—quite likely—patient consents, says Mary Ann Baily, PhD, an associate for ethics and health policy at the Hastings Center in Garrison, N.Y.
“I think it’s very maddening,” Dr. Baily says. “The OHRP has created an impossible situation. Why in heaven’s name would the OHRP want to tie QI researchers up in knots?”
Despite multiple requests from The Hospitalist, officials at the OHRP declined to comment and clarify the issue.
Some QI researchers see the need for regulation. There should be some oversight, even when it comes to QI, says Lakshmi Halasyamani, MD, vice chair of the department of medicine at St. Joseph Mercy Hospital in Ann Arbor, Mich., and a member of SHM’s Board of Directors. While the intervention involved in the Johns Hopkins case seems relatively benign, this isn’t always the case with QI.
It comes down to evaluating the level of risk to patients, Dr. Halasyamani says: there needs to be someone, somewhere in the process, weighing the risks of each intervention.
“We shouldn’t be creating a whole new level of bureaucracy that will slow down low-risk interventions that could have a huge impact,” she adds. “But you want someone looking at whether the interventions could have a downside.”
When it’s a low-risk intervention—like the one initiated by the Johns Hopkins researchers—then the forms filled out by patients giving consent for treatment should be enough, Dr. Halasyamani says.
While ethicists and researchers kick these ideas around, others have decided to take more immediate action to clear the way for research on low-risk interventions.
Dr. Wachter and others are rallying around the Hopkins researchers and orchestrating a letter-writing campaign. “You may ask why we are pushing back so hard to get people to take another look at the OHRP’s ruling on this one study,” he says. “Johns Hopkins has the money and infrastructure to deal with this. They will find a way to get IRB approval from the Michigan hospitals.
“But what about the next time I want to do a quality improvement study, or when one of my residents on a six-month rotation wants to do one? I’m going to say I don’t think you should. It’s going to take a month to get IRB approval and then, potentially, every patient, physician, nurse—basically everyone who comes into contact with the intervention—will need to be consented. The ruling will shut down innovation.” TH
Linda Carroll is a medical journalist based in New Jersey.
Reference
- Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med. 2006 Dec 28;355(26):2725-2732. Erratum in: N Engl J Med. 2007 Jun 21;356(25):2660.