On June 30, a new government agency within the Department of Health and Human Services (HHS) called the Federal Coordinating Council for Comparative Effectiveness Research released its first report to President Obama and Congress. Authorized by the American Recovery and Reinvestment Act of 2009, the council is tasked with prioritizing and coordinating how multiple government agencies will spend the stimulus package’s $1.1 billion windfall for comparative effectiveness research (CER), which is aimed at improving healthcare outcomes in the U.S.
Of the funds, $400 million has been directed to the National Institutes of Health (NIH), $300 million to the Agency for Healthcare Research and Quality, and the remaining $400 million to the Office of the Secretary of Health and Human Services.
Patrick Conway, MD, MSc, the federal coordinating council’s executive director, is well versed in the potential impact of comparative effectiveness research on hospitalists. Just as Dr. Conway was joining the Center for Health Care Quality at Cincinnati Children’s Hospital after a fellowship at Children’s Hospital of Philadelphia, the pediatric hospitalist was named a 2007-2008 White House Fellow at HHS—the first hospitalist accepted into the program.
In August 2008, he was tapped for the post of chief medical officer in the department’s Office of the Assistant Secretary for Planning and Evaluation.
Meanwhile, Dr. Conway still sees patients on weekends at Children’s National Medical Center in Washington, D.C. He recently talked with The Hospitalist about the challenges of coordinating research funding across multiple government agencies, how the Office of the Secretary’s $400 million allocation could be best spent, and what it all means for patient care.
Question: What are the biggest recommendations in the federal coordinating council’s report?
Answer: We approached this as “What unique role can the Office of the Secretary research funds address?” We identified data infrastructure as a potential primary investment. That includes things such as patient registries, distributed data networks, and claims databases.
Traditionally, the federal government has not invested in infrastructure because we have funded independent investigators on a one-question-by-one-question basis. The way I see this infusion of funds is it allows you to invest in data infrastructure that can then be used to answer literally hundreds of questions over time.
Secondly, we identified dissemination and translation, so how do we think about innovative ways to actually communicate directly to patients and physicians at the point of care? We also identified priority populations, including racial and ethnic minorities, persons with multiple chronic conditions, children, and the elderly. And lastly, we identified priority interventions, such as behavioral change, delivery systems, and prevention. So how do we decrease obesity, how do we decrease smoking rates?
Q: How will you address the challenge of coordinating research funding across multiple federal agencies?
A: I think the first step is doing the inventory [of CER], which is going to be an ongoing and iterative process. By doing that, the council and HHS can avoid duplicating efforts and actually coordinate efforts across the federal government.
Honestly, I think the biggest challenge is that these are extremely large, complex government programs. These are hundreds of millions of dollars going out to a huge variety of researchers, academic institutions, etc. One of the systems we’re trying to put in place is a better way to track what’s going on now, so we can actually coordinate going forward. It’s something as simple as now having a common definition: we tag all money [spent on CER], so we know exactly what we’re spending money on. That sounds really simple, but it’s actually never been done before. This is a relatively new area of emphasis for the federal government and for healthcare.
Q: What main point should hospitalists take away from this report?
A: This research will address primary questions about which medicine is best for which patient but also address larger issues, such as care coordination and how care is organized within the hospital and outside the hospital, so that we focus on the gamut of questions that have the potential to improve patient outcomes.
Q: What were some common themes you heard in the public listening sessions and online comments you solicited during the report’s preparation?
A: One of them was the importance of engaging stakeholders throughout the process, getting input from patients, physicians, policymakers. … We also heard themes about the need for infrastructure development, particularly data infrastructure. We also heard a theme about the need for more work on research methodology and training of researchers. And then we heard a strong theme around “This needs to actually be disseminated and translated into care delivery.” So producing knowledge is helpful, but translating that knowledge into better outcomes is the ultimate goal.
Q: The report repeatedly mentions “real world” healthcare settings. Is this meant as a criticism of the idealized outcomes of efficacy research as it is typically conducted?
A: I don’t know that I would frame it as a criticism. I will say that as hospitalists, we are faced with patients every day where there’s unclear evidence about how best to manage that patient. And therefore, we need more evidence on the real questions that patients and physicians encounter in practice. I think we’ve had a long history of strong, well-funded randomized trials in this country, and I think we need to complement that with other methods of research as well, including databases, quality improvement, and measuring interventions.
Q: What are the limitations in translating all of this knowledge to interventions for the patients who need it?
A: I think the research paradigm traditionally has been: We fund an investigator. They go off for years and do their research. And then they publish it in the New England Journal [of Medicine] or JAMA, and we call that a success.
I would argue that we’re at a time where we need to think about a new paradigm, where just publishing it is some middle step. And we need to think about how you actually link the research enterprise to the care delivery enterprise, so research is rapidly implemented and you’re measuring outcomes and ensuring that research actually reaches the patients and clinicians.
Q: Are there any real-world examples of how to do this?
A: Say we had a national patient library and we thought about things that we have not traditionally thought about in healthcare—social networking, Twitter, Facebook, media channels that reach people now. How do you insert health content into those channels to actually change people’s behavior, or at least inform them? The medical establishment thinks we publish it in the New England Journal [of Medicine] and the world changes. That’s just fundamentally not true.
On the provider side, how do we think about the lay media? How do we think about channels that providers use, like UpToDate and Medscape? How do we get comparative effectiveness content into those channels that are used by providers and physicians?
Q: How should CER address the needs of patient groups that are under-represented in traditional medical studies?
A: I think that’s a huge area. Efficacy trials generally will show something works for the average patient. But the issue is, and I’ll give you a concrete example, if you are an elderly, African-American female with a couple of conditions (diabetes and heart disease), how will that treatment work for you? So I think the power of comparative effectiveness is that we, especially with the data sources we just talked about, can look at patient subgroups and get as close as possible to the individual level to really present information. Instead of [saying], this works on average patients, which includes lots of patients that don’t look at all like you, [we can] say we’ve looked and it actually works well for racial and ethnic minorities, or persons with disabilities, or the very elderly.
Q: What do you hope ultimately will come from this report?
A: On the care delivery side, this is an opportunity for hospitalists to test different interventions to improve care in hospitals. As for what I hope to achieve: as we invest in all these individual programs, we are building in evaluation components to assess how this impacts patient outcomes.
I think the ultimate goal is to improve patient outcomes in this country, which I know is an unbelievably grand goal, but I think you build up to that by each investment. You track what it produces and ultimately how it affects outcomes, and so you at least start to build a sense of what this program means for the nation’s health. TH
Bryn Nelson is a freelance writer based in Seattle.