The Future of Choice & VA Health Care
In late August, the President signed legislation that provided $2.1 billion to extend a program that gives veterans enrolled in the VHA a “Choice” in where they receive care. In the next few months, Congress will consider various plans to redesign the Veterans Choice Program. As policy makers consider these options, they should assess not only the plan’s ability to remedy any problems in veterans’ access to care, but also its broader impact. Congress must ensure that the next Choice Program does not compromise VHA’s overall quality of health care services delivered to veterans—care that has been demonstrated, with geographic variations, to be equal to, and often superior to, non-VA care.
Launched in 2014 as part of the Veterans Access, Choice and Accountability Act, the temporary Choice Program was meant to remedy a crisis of limited capacity, access, and excessive delays reported at many VHA facilities. The program offered non-VA options to veterans who had to wait long or travel far for their care. To date, the program has provided health care services to more than 1.6 million veterans.
As the Senate and House VA committees begin to draft new authorizing language for the program, many have spoken out about these issues and highlighted the unique importance of the VHA’s comprehensive, integrated model of care—one focused on the specific problems of veterans. NOVA, alongside its partners—the Association of VA Psychologist Leaders, the Association of VA Social Workers, and the organization Fighting for Veterans Healthcare—has offered its thoughts on the best way to continue providing veterans timely access to this type of high-quality health care.
Congress must ensure far more than simply preserving the VHA’s innovative, integrated-care model. It must guarantee that the VHA’s system for clinically training the majority of U.S. health care professionals is maintained. The program’s funding must include a robust research department whose mission benefits not only veterans but also the health care provided to every American. It must also ensure that the community has the capacity to absorb an influx of veterans in a timely manner.
Community providers must be required to meet VHA’s elevated standards, use evidence-based treatments driven by measurement-based care, have knowledge of military culture and competence in veteran-specific problems, perform needed screenings, and be subject to the same training and continuing education requirements as VHA providers.
Given that non-VA care is more expensive than VHA care, Congress must ensure that any Choice care offered to veterans is offered judiciously. Otherwise, the cost of Choice could wind up eroding VHA’s level of services. Congress also must ensure that the VHA is improved, not dismantled. As surveys and studies have shown, this is what the majority of veterans prefer and what administration and congressional leaders have promised them.
As VA nurses providing and coordinating care for veterans, we have a stake in how Choice and all community care is provided. As an organization, NOVA understands that community providers are a crucial part of an integrated network set up to provide care where there are shortages, but VHA must remain the first point of access and coordinator of that care.
Any new legislation addressing community-integrated care must include measures that hold providers accountable for the performance and timeliness of care and services. It also must take into account the VHA’s unparalleled integration of primary and mental health care and the many wraparound services offered to veterans.
Finally, the congressional budgeting process must include adequate funding for both VHA services and its integrated-community care accounts. The practice of reallocating funds from VHA health care accounts to pay for non-VA care cannot continue.
Making significant, lasting improvements in how VHA provides health care within its facilities and with partners in the community is unquestionably the right thing to do. It honors the sacred obligation we owe to veterans. Congress must be willing to invest in the VHA and provide veterans with the type of high-quality, veteran-centered care that serves their complex needs.
VIDEO: Gastroenterologist survey shows opportunity to expand Lynch syndrome testing
ORLANDO – A large percentage of U.S. gastroenterologists said that they don’t routinely order genetic testing for Lynch syndrome for patients with early-onset colorectal cancer, often because the physicians believe that the test is too expensive, or because they are unfamiliar with interpreting or applying the results, according to survey replies from 442 gastroenterologists.
Another factor hindering broader screening for Lynch syndrome (also known as hereditary nonpolyposis colorectal cancer) is that many of the surveyed gastroenterologists did not see themselves as having primary responsibility for ordering Lynch syndrome testing in patients who develop colorectal cancer before reaching age 50 years, Jordan J. Karlitz, MD, and his associates reported in a poster at the World Congress of Gastroenterology at ACG 2017.
The survey results showed that only a third of respondents believed it was primarily the attending gastroenterologist’s responsibility to order testing for Lynch syndrome using either a microsatellite instability test or immunohistochemistry. A larger percentage, 38%, said that ordering one of these tests was something a pathologist should arrange, 15% said it was primarily the responsibility of the attending medical oncologist, and the remaining respondents cited a surgeon or genetic counselor as having primary responsibility for ordering the test.
This absence of a clear consensus on who orders the test shows a “diffusion of responsibility” that often means testing is never ordered, Dr. Karlitz said in a video interview. What’s needed instead is “reflex testing” that’s done automatically for appropriate patients, an approach that has become standard at several U.S. medical centers, he noted.
The survey Dr. Karlitz and his associates ran stemmed from a report they published in 2015 that focused on management of the 274 patients diagnosed with early-onset colorectal cancer in Louisiana during 2011, defined as cancers diagnosed in patients aged 50 years or younger. Data collected in the Louisiana Tumor Registry showed that Lynch syndrome testing occurred for only 23% of these patients, the researchers reported (Am J Gastroenterol. 2015 Jul;110[7]:948-55).
To better understand the underpinnings of this low testing rate, they sent a survey about Lynch syndrome testing by email in March 2017 to nearly 12,000 physicians on the membership roster of the American College of Gastroenterology. They received 455 replies, with 442 (97%) of the responses coming from gastroenterologists. When asked why they might not order Lynch syndrome testing for patients with early-onset colorectal cancer, 22% said the cost of testing was prohibitive, 18% cited their lack of familiarity with the Lynch syndrome tests and how to properly interpret their results, and 15% attributed their decision to a lack of easy access to genetic counseling for their patients; additional reasons were cited by fewer respondents.
Dr. Karlitz noted that current recommendations from the National Comprehensive Cancer Network call for Lynch syndrome testing for all patients who develop colorectal cancer regardless of their age at diagnosis.
mzoler@frontlinemedcom.com
On Twitter @mitchelzoler
AT THE WORLD CONGRESS OF GASTROENTEROLOGY
Key clinical point:
Major finding: Among gastroenterologist survey respondents, one-third said they had primary responsibility for ordering Lynch syndrome testing.
Data source: Survey emailed to members of the American College of Gastroenterology and completed by 455 physicians and surgeons.
Disclosures: Dr. Karlitz has been a speaker on behalf of Myriad Genetics, a company that markets genetic tests for Lynch syndrome.
FDA Approves Patient-Assisted Mammography
Women of all ages and sizes will be glad to know that they now have some say in the amount of pressure applied to the breast during mammography. The FDA has cleared Senographe Pristina with Self-Compression, the first patient-assisted 2D digital mammography system.
Digital mammograms use a computer along with x-rays. During an exam with the new system, the technologist positions the patient and initiates compression, then guides the patient in using the handheld wireless remote control to adjust the compression to a comfortable level. The technologist makes the final decision on whether the compression is adequate.
A clinical validation demonstrated that the addition of a remote to allow self-compression did not negatively affect image quality. Nor did allowing the patient to help with adjustments make the exam take significantly longer.
Blood donors’ pregnancy history may impact recipients’ risk of death
In a large study, men who received blood from women with a history of pregnancy had an increased risk of death after transfusion.
However, receiving blood from a woman who was never pregnant did not carry the same risk.
And female recipients of blood transfusions had a similar risk of death whether they received blood from women with or without a history of pregnancy.
Rutger A. Middelburg, PhD, of Sanquin Research in Leiden, Netherlands, and his colleagues reported these results in JAMA.
The researchers noted that the most common cause of transfusion-related mortality is transfusion-related acute lung injury, which has been associated with transfusions from female donors, specifically those with a history of pregnancy.
Therefore, Dr Middelburg and his colleagues set out to determine whether an increased risk of mortality after red blood cell transfusions could depend on the donor’s history of pregnancy.
The team studied first-time transfusion recipients at 6 major Dutch hospitals.
When the researchers looked only at patients who received blood from a single type of donor (male donors, female donors with a history of pregnancy, or female donors without), there was a significantly higher risk of death among men who received blood from females with a history of pregnancy.
Male recipients
There were 1722 deaths among the 12,212 males who only received blood from male donors (hazard ratio [HR]=1.00).
There were 1873 deaths among the 13,669 males who only received blood from females with a history of pregnancy (HR=1.13, P=0.03).
And there were 1831 deaths among the 13,538 men who only received blood from females without a history of pregnancy (HR=0.93, P=0.29).
Female recipients
There were 1752 deaths among the 13,332 females who only received blood from male donors (HR=1.00).
There were 1871 deaths among the 14,770 females who only received blood from females with a history of pregnancy (HR=0.99, P=0.92).
And there were 1868 deaths among the 14,685 females who only received blood from females with no history of pregnancy (HR=1.01, P=0.92).
Role of age
The researchers also found the association between donor pregnancy history and recipient death was only observed for men younger than 51.
There were 107 deaths among the 2251 males ages 0 to 17 who received blood from male donors (HR=1.00). And there were 124 deaths among the 2556 males ages 0 to 17 who received blood from females with a history of pregnancy (HR=1.63, P=0.04).
There were 84 deaths among the 1170 males ages 18 to 50 who received blood from male donors (HR=1.00). And there were 94 deaths among the 1296 males ages 18 to 50 who received blood from females with a history of pregnancy (HR=1.50, P=0.06).
There were 598 deaths among the 4292 males ages 51 to 70 who received blood from male donors (HR=1.00). And there were 645 deaths among the 4775 males ages 51 to 70 who received blood from females with a history of pregnancy (HR=1.10, P=0.31).
There were 933 deaths among the 4499 males ages 71 and older who received blood from male donors (HR=1.00). And there were 1010 deaths among the 5042 males ages 71 and older who received blood from females with a history of pregnancy (HR=1.06, P=0.47).
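For readers who want to see how the raw counts above line up before any modeling, the short sketch below computes crude death proportions for each reported subgroup. This is only an illustration: the published hazard ratios come from Cox regression models that account for follow-up time and covariates, which simple proportions do not capture.

```python
# Crude death proportions from the counts reported above (illustration only).
# The published hazard ratios come from adjusted Cox models, so these
# unadjusted proportions will not reproduce them exactly.
from dataclasses import dataclass


@dataclass
class Subgroup:
    label: str
    deaths: int
    recipients: int

    @property
    def crude_rate(self) -> float:
        # Deaths divided by recipients, ignoring differences in follow-up time.
        return self.deaths / self.recipients


subgroups = [
    Subgroup("Males 0-17, male donors", 107, 2251),
    Subgroup("Males 0-17, ever-pregnant female donors", 124, 2556),
    Subgroup("Males 18-50, male donors", 84, 1170),
    Subgroup("Males 18-50, ever-pregnant female donors", 94, 1296),
    Subgroup("Males 51-70, male donors", 598, 4292),
    Subgroup("Males 51-70, ever-pregnant female donors", 645, 4775),
    Subgroup("Males 71+, male donors", 933, 4499),
    Subgroup("Males 71+, ever-pregnant female donors", 1010, 5042),
]

for g in subgroups:
    print(f"{g.label}: {g.crude_rate:.1%}")
```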
The researchers said more work is required to replicate these findings, determine their clinical significance, and identify the underlying mechanism.
Study shows similar safety with DOACs and warfarin
New research suggests patients with venous thromboembolism (VTE) have similar safety outcomes whether they receive direct oral anticoagulants (DOACs) or warfarin.
The population-based study showed no significant difference in the risk of major bleeding or all-cause mortality at 90 days, whether patients received warfarin or a DOAC (apixaban, dabigatran, or rivaroxaban).
Brenda R. Hemmelgarn, MD, PhD, of the University of Calgary in Alberta, Canada, and her colleagues reported results from this study in The BMJ.
The researchers noted that recent clinical trials have shown similar effectiveness and a reduced or similar risk of major bleeding complications for DOACs compared with warfarin. However, clinical trials involve a highly selected group of patients, so the rate of safety events reported in trials may not reflect those seen in everyday clinical practice.
With this in mind, Dr Hemmelgarn and her colleagues conducted a population-based study to determine the safety of DOAC and warfarin use in adults diagnosed with VTE between January 1, 2009, and March 31, 2016.
Using healthcare data from 6 jurisdictions in Canada and the US, the researchers identified 59,525 adults with a new diagnosis of VTE and a prescription for a DOAC (n=12,489) or warfarin (n=47,036) within 30 days of diagnosis.
Participants were followed for an average of 85.2 days. Of the 59,525 participants, 1967 (3.3%) had a major bleed, and 1029 (1.7%) died during the follow-up period.
Bleeding rates at 30 days ranged between 0.2% and 2.9% for DOACs and 0.2% and 2.9% for warfarin.
Bleeding rates at 60 days ranged between 0.4% and 4.3% for DOACs and 0.4% and 4.3% for warfarin.
The hazard ratio for major bleeding at 90 days was 0.92 (favoring DOACs). The hazard ratio for all-cause mortality at 90 days was 0.99.
The researchers said there was no evidence of heterogeneity across treatment centers, between patients with and without chronic kidney disease, across age groups, or between male and female patients.
The team noted that this is an observational study, so no firm conclusions can be drawn about cause and effect. And they could not rule out the possibility that their results may be due to confounding factors.
Women may ask fewer questions at scientific conferences
Women may ask fewer questions than men at scientific conferences, according to research published in PLOS ONE.
Researchers studied question-asking behavior at a large international conference and found that men asked 80% more questions than women.
“Previous research has shown that men are more likely to be invited to speak at conferences, which is likely to lead to them having a higher social reputation than their female peers,” said study author Amy Hinsley, PhD, of the University of Oxford in the UK.
“If women feel that they are low-status and have suffered discrimination and bias throughout their career, then they may be less likely to participate in public discussions, which will, in turn, affect their scientific reputation. This negative feedback loop can affect women and men, but the evidence in this study suggests that women are affected more.”
For this study, Dr Hinsley and her colleagues looked at question-asking behavior at the 2015 International Congress for Conservation Biology. The conference had a clear code of conduct for its 2000 attendees, which promoted equality and prohibited discrimination.
The authors observed 31 sessions across the 4-day conference, counting how many questions were asked and whether men or women were asking them.
Accounting for the number of men and women in the audience, men asked 1.8 questions for each question asked by a woman.
The same pattern was observed in younger researchers (1.8 to 1), suggesting it is not simply due to senior researchers, a large proportion of whom are men, asking all the questions.
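The article does not detail the study’s statistical adjustment, but one simple way to “account for the number of men and women in the audience” is to compare per-capita question rates rather than raw question counts. The sketch below uses entirely hypothetical session numbers to show how such an audience-adjusted ratio can work out to 1.8.

```python
# Hypothetical illustration of an audience-adjusted question-rate ratio.
# The study's own statistical model may differ; the numbers here are invented.

def question_rate_ratio(questions_by_men: int, questions_by_women: int,
                        men_in_audience: int, women_in_audience: int) -> float:
    """Questions per male attendee divided by questions per female attendee."""
    rate_men = questions_by_men / men_in_audience
    rate_women = questions_by_women / women_in_audience
    return rate_men / rate_women


# Hypothetical session: 60 men and 40 women in the room,
# 27 questions asked by men and 10 by women.
ratio = question_rate_ratio(27, 10, 60, 40)
print(f"Audience-adjusted ratio: {ratio:.1f}")  # -> 1.8, i.e., 80% more per attendee
```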
Dr Hinsley and her colleagues feel this study should be used as an opportunity to raise awareness of the disparity in question-asking behavior between men and women and inspire discussion about why it is happening.
“We want our research to inspire conference organizers to encourage participation among all attendees,” said Alison Johnston, PhD, of Cambridge University in the UK.
“For example, questions over Twitter or other creative solutions could be tested. Session chairs could also be encouraged to pick participants that represent the gender in the audience. However, these patterns of behavior we observed are only a symptom of the bigger issue. Addressing this alone will not solve the problem [of gender inequality].”
“We should continue to research and investigate the underlying causes so we can implement actions that change the bigger picture for women in science. If we are to level the playing field for women in STEM, the complex issue of gender inequality has to stay on the agenda.”
High ‘nocebo’ effect observed when patients knowingly switch to a biosimilar
Evidence suggests that patients who switch from an originator biologic to open-label treatment with its biosimilar have an increase in subjective but not objective assessments and also discontinue the drug at a high rate, possibly reflecting a “nocebo” response to switching.
If patients’ own negative expectations induced “negative symptoms (hyperalgesia or adverse events) during treatment, the so-called nocebo response,” and this was the main contributing factor to the high discontinuation rate, then it will be very important for clinicians to improve communication with patients and manage their expectations in order to raise acceptance and persistence rates, the investigators said.
Of the 47 patients who discontinued the infliximab biosimilar CT-P13, 26 did so because of a perceived lack of effect, 11 because of adverse events, and 10 because of a combination of the two.
Univariate Cox regression analyses showed that shorter infliximab infusion interval, higher 28-joint Disease Activity Scores (DAS28, based on either C-reactive protein [CRP] or erythrocyte sedimentation rate), higher swollen joint count, and patients’ global disease activity score at baseline were associated with CT-P13 discontinuation.
However, patients’ and clinicians’ awareness of the switch could have influenced these factors, the investigators said. For instance, they found that patients who discontinued CT-P13 reported a significant increase in “subjective” assessments such as tender joint count and patient’s global disease activity but not “objective” measures such as swollen joint count or CRP.
While the mean Bath Ankylosing Spondylitis Disease Activity Index score increased from 3.8 to 4.3, the mean DAS28-CRP in rheumatoid arthritis and psoriatic arthritis patients remained stable at 2.2 from baseline to month 6; CRP and anti-infliximab antibody levels also did not change.
“If immunogenicity would have caused CT-P13 discontinuation, we would have expected to find more patients with objectively active disease and/or allergic reactions,” the study authors wrote.
Three of the authors reported receiving speaking and consultancy fees from several pharmaceutical companies. The study was not supported by an outside grant.
FROM ARTHRITIS & RHEUMATOLOGY
Key clinical point: Patients who switched from an originator biologic to open-label treatment with its biosimilar had an increase in subjective but not objective assessments and discontinued the drug at a high rate, possibly reflecting a “nocebo” response to switching.
Major finding: Nearly a quarter of 192 patients with rheumatoid arthritis, psoriatic arthritis, or ankylosing spondylitis who knowingly switched from originator infliximab to its biosimilar CT-P13 discontinued the biosimilar during 6 months of follow-up.
Data source: A multicenter, prospective cohort study of 192 infliximab-treated patients who transitioned to the infliximab biosimilar CT-P13.
Disclosures: Three of the authors reported receiving speaking and consultancy fees from several pharmaceutical companies. The study was not supported by an outside grant.
CardioMEMS shows real-world success as use expands
DALLAS – Management of outpatients with advanced heart failure using an implanted pulmonary artery pressure monitor continues to show real-world efficacy and safety at least as impressive as in the pivotal trial for the device.
Data from the first waves of patients to receive the CardioMEMS implanted pulmonary artery pressure (PAP) monitor since it received Food and Drug Administration marketing approval in May 2014 also showed steady uptake of this fluid volume management strategy for patients with advanced heart failure, despite Medicare reimbursement issues in some U.S. regions, J. Thomas Heywood, MD, said at the annual scientific meeting of the Heart Failure Society of America. He estimated that more than 6,000 U.S. heart failure patients have now had a CardioMEMS PAP monitor implanted.
“The clinicians using CardioMEMS now have a lot more experience” than they had during the trial, he said in an interview. “They have more experience using the device, they know what treatments to use to lower PAP more effectively, and they are now convinced that patients will benefit from reducing diastolic PAP.”
Dr. Heywood estimated that tens of thousands more U.S. heart failure patients with New York Heart Association class III disease and a recent history of at least one heart failure hospitalization are eligible to receive an implanted PAP monitor, dwarfing the more than 6,000 patients who have received a device so far.
The postapproval study
The newest efficacy data come from the first 300 patients enrolled in the CardioMEMS HF System Post Approval Study, a registry of patients receiving an implanted PAP monitor funded by the device’s manufacturer and scheduled to include a total of 1,200 patients. Dr. Heywood said full enrollment was on track for completion by the end of October 2017.
The first 300 patients enrolled in the postapproval study were older than the CHAMPION cohort; they averaged about 69 years of age, compared with about 62 years in CHAMPION, were more often women (38% vs. 28% in CHAMPION), and were more likely to have heart failure with preserved ejection fraction (41% vs. about 22%).
A similar pattern existed for the 6-month cumulative tally of PAP area under the curve, which showed an average rise of 42 mm Hg/day in the CHAMPION control patients, an average drop of 160 mm Hg/day in the CHAMPION patients managed using their CardioMEMS data, and a drop of 281 mm Hg/day in the 300 postapproval study patients.
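The area-under-the-curve figures quoted above summarize pressure change accumulated over the follow-up period. As a rough illustration only (the study’s exact calculation is not described here), the sketch below assumes the metric is the running sum of daily mean PAP deviations from the value at implant, so a negative total corresponds to a sustained pressure drop.

```python
def pap_auc_mmhg_days(daily_mean_pap, baseline_pap):
    """Cumulative pressure-time area relative to baseline, in mmHg*days (assumed definition).

    daily_mean_pap: sequence of daily mean pulmonary artery pressures (mmHg)
    baseline_pap: mean PAP at implant (mmHg)
    A negative total means pressures ran below baseline overall (a drop);
    a positive total means they ran above it (a rise).
    """
    return sum(p - baseline_pap for p in daily_mean_pap)

# Toy example: pressures drifting slowly below a 30 mmHg baseline over ~6 months
readings = [30 - 0.01 * day for day in range(180)]
print(round(pap_auc_mmhg_days(readings, baseline_pap=30), 1))  # about -161 mmHg*days
```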
“We’re now using the implanted sensor in a broader population of patients, and one wonders whether the effect will be diluted. What we see is at least as good as in the CHAMPION trial. This is just an early snapshot, but it is exciting that we see no erosion of the benefit. It’s a great indication that the correct patients are receiving it,” Dr. Raval said while presenting a poster at the meeting.
Further scrutiny of the same 300 patients showed another feature of the impact of PAP monitoring on patient outcomes: The first 90 days with the PAP monitor in place led to a greater number of tweaks in patient treatment and a steady fall in PAP. During days 91-180, PAP tended to level off, the number of medication adjustments dropped, and heart failure hospitalizations fell even more than in the first 90 days, Joanna M. Joly, MD, reported in a separate poster at the meeting.
The data showed “effective reduction” of PAP during the second half of the study despite fewer medication adjustments. How was that possible? Patients who transmit data on their PAPs undergo “modeling of their behavior” based on the feedback they receive from the device, Dr. Joly suggested. Regular measurement of their PAP and seeing how the number relates to their clinical status helps patients “understand the impact of their nonadherence to diet and their medications.” Another factor could be the growing familiarity clinicians develop over time with PAP fluctuations that individual patients display repeatedly that are usually self-correcting. Also, patients may undergo “hemodynamic remodeling” that results in improved self-correction of minor shifts in fluid volume and vascular tone, she said.
This pattern of a reduced need for interventions after the first 90 days with a PAP implant suggests that many patients managed this way may be able to transition to care largely delivered by local providers, or even play a greater role in their own self-care once their PAP and clinical state stabilize, Dr. Joly said.
The findings imply that by the end of the first 90 days, “patients accept the device and manage themselves better. It becomes basically a behavioral device” that helps patients better optimize their diet and behavior, Dr. Raval observed.
Safety holds steady
Continued real-world use of PAP monitoring has also resulted in new safety insights. During the first 3 years the CardioMEMS device was on the U.S. market, May 2014–May 2017, the FDA’s adverse event reporting system for devices, the Manufacturer and User Facility Device Experience (MAUDE), received reports on 177 unique adverse events in 155 patients implanted with a PAP monitor, Muthiah Vaduganathan, MD, reported at the meeting. During the same 3-year period, he estimated that at least 5,500 U.S. patients had received a CardioMEMS device, based on data he obtained from the manufacturer, Abbott. This works out to an adverse event rate of about 2.8%, virtually identical to the rate reported from CHAMPION, noted Dr. Vaduganathan, a cardiologist at Brigham and Women’s Hospital, Boston.
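The roughly 2.8% figure is simply the number of patients with a MAUDE-reported event divided by the estimated number of U.S. implants, as the short back-of-the-envelope calculation below shows (both counts are the ones quoted above).

```python
# Back-of-the-envelope check of the reported adverse event rate
patients_with_events = 155   # patients with at least one event reported to MAUDE, May 2014-May 2017
estimated_implants = 5500    # estimated U.S. CardioMEMS implants over the same period

rate = patients_with_events / estimated_implants
print(f"{rate:.1%}")  # ~2.8%
```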
Analysis of both the 22 deaths and the episodes of pulmonary artery injury or hemoptysis showed that the preponderance occurred relatively early after the device’s introduction for U.S. use, suggesting that “a learning curve may exist for the most serious complications,” he said. “Improved safety and device durability may result from careful patient selection, increased operator training, and refined technologies.”
Dr. Vaduganathan cautioned that the MAUDE database is limited by its bias toward serious adverse events, selective reporting, and lack of adjudication for the reported events. Concurrently with his report at the meeting, a written version appeared online (JAMA Cardiol. 2017 Sep 18. doi:10.1001/jamacardio.2017.3791).
“The adverse event rate was reassuringly low, well below the accepted threshold for device safety. It bodes favorably for the device,” he said in an interview.
“But with a passive surveillance system like MAUDE, adverse events are likely underreported; we see in MAUDE the most severe adverse events. There is certainly a larger spectrum of more minor events that we are not seeing, but I think these numbers accurately reflect serious events.” A full registry of every U.S. patient who receives the device, similar to what’s in place for U.S. patients who undergo transcatheter aortic valve replacement, would provide a more complete picture of the risks, Dr. Vaduganathan suggested.
He also voiced some surprise about the frequency of pulmonary artery injury, which was not as apparent in the 550 total patients enrolled in CHAMPION. Clinicians who place the PAP monitor are required to first take a training program, but the manufacturer mandates no minimum number of placements an operator must assist on before launching a new CardioMEMS practice, Dr. Vaduganathan said. Many of the pulmonary artery injuries reported to MAUDE were wire perforations resulting from loss of wire control, he noted.
Clarifying the optimal CardioMEMS recipients
PAP monitoring for patients with advanced heart failure “is a major advance for certain patients who have historically been very challenging to manage,” Dr. Vaduganathan said, especially patients with heart failure with preserved ejection fraction, who have few other treatment options. But “it’s often difficult to know when to pull the trigger” and proceed with placing a PAP monitor in an eligible patient, he added. “Greater experience will help us better understand that,” he predicted.
Dr. Heywood said that, in addition to the standard criteria of NYHA class III symptoms and a recent history of a heart failure hospitalization, the other clinical feature he looks for in a patient who is a possible CardioMEMS recipient is a persistently elevated systolic PAP as measured using echocardiography.
“These are patients with evidence of an ongoing hemodynamic problem despite treatment, and I need more data to do a better job of getting their PAP down.” Although the PAP that patients self-measure once they have the device in place is their diastolic PAP, measuring systolic PAP by echo is usually a good surrogate for finding patients who also have a persistently elevated diastolic PAP, he explained.
Another important selection criterion is to look for the patients who are dying from heart failure rather than with heart failure, Dr. Heywood added.
“If heart failure is the major thing wrong, then we can improve their quality of life” by guiding fluid management with regular PAP measurement, especially patients with preserved left ventricular ejection fraction who have few other treatment options right now, he said.
The CardioMEMS HF System Post Approval Study is sponsored by Abbott, which markets CardioMEMS. Dr. Heywood has been a consultant to and/or has received research funding from Abbott as well as Impedimed, Medtronic, Novartis, and Otsuka. Dr. Raval has been a consultant to Abbott. Dr. Joly and Dr. Vaduganathan had no disclosures.
mzoler@frontlinemedcom.com
On Twitter @mitchelzoler
AT THE HFSA ANNUAL SCIENTIFIC MEETING
VIDEO: Mobile stroke units aren’t just expensive toys
SAN DIEGO – Mobile stroke units are specially equipped ambulance units designed to respond and deliver treatment to stroke patients as swiftly as possible. They are outfitted with a portable CT scanner, a mobile lab, and specialized personnel, including a telemedicine unit to assist with diagnosis. If a patient is experiencing an ischemic stroke, the unit can deliver thrombolytic therapy on the spot, circumventing travel to an emergency department.
But are they cost effective? There are 13 active units in the United States, and they’re not cheap. They cost about $3.5 million to build and operate over 5 years, according to James Grotta, MD, a neurologist with the Memorial Hermann Medical Group and director of stroke research at Memorial Hermann–Texas Medical Center, both in Houston.
In a video interview at the annual meeting of the American Neurological Association, Dr. Grotta described how his group is studying the impact of mobile stroke units on time to treatment, as well as the long-term costs and cost savings associated with them. The ongoing clinical trial compares outcomes in patients eligible for tissue plasminogen activator when they are treated by a mobile stroke unit versus standard prehospital triage and transport by emergency medical services, with the mobile stroke unit and emergency medical services serving as the primary responders on alternating weeks. Primary outcomes include cost-effectiveness, the change in Rankin scale score from baseline to 90 days, and the diagnostic agreement between a vascular neurologist in the mobile stroke unit and a telemedicine vascular neurologist consulted from the unit.
Mobile stroke units can even supplement existing health care in case of an emergency. Dr. Grotta also recounted how one unit assisted during the aftermath of Hurricane Harvey.
AT ANA 2017
SHM’s RADEO Program aids safer opioid prescribing
In January 2017, the U.S. Centers for Medicare & Medicaid Services honored SHM for its hospital patient safety and quality improvement efforts. A big reason for the plaudits was the society’s successful program and implementation toolkit called Reducing Adverse Drug Events related to Opioids (RADEO), now in its second phase.
Kevin Vuernick, senior project manager of SHM’s Center for Hospital Innovation and Improvement, says that the freely available RADEO guide explains how to develop and carry out quality improvement projects related to inpatient opioid prescribing. One of the first steps was devising interventions that hospitalists could implement to reduce opioid-related adverse events. An independent evaluator will help analyze the program’s data, best practices, and outcomes.
Keri Holmes-Maybank, MD, MSCR, FHM, an academic hospitalist at the Medical University of South Carolina, Charleston, said that the RADEO guide has been a “phenomenal” resource. Dr. Holmes-Maybank, who led her medical center’s involvement in RADEO’s first round, said the guide helped her identify areas that her institution could work on. For one project, the medical university implemented the Pasero Opioid-Induced Sedation Scale to help prevent adverse opioid-related events, such as life-threatening respiratory depression. For a second project, the center combined existing discharge information into a more complete document that could be given to patients to better educate them and their caregivers.
St. Anthony Hospital in Oklahoma City first used RADEO to revisit how it was evaluating patients’ pain and then widened the scope to reassess how it was managing its opioid treatment and narcotic use. “We just kept swinging at the tree, trying to hit the low-hanging fruit and seeing what we could improve upon,” said Matthew Jared, MD, a hospitalist at St. Anthony and its program lead during its involvement in phase one of RADEO.
Dr. Jared is hoping to build on the momentum with a plan to develop better in-house protocols for monitoring pain, employing alternative treatments, and establishing clear lines of communication. “That’s our next step forward: really taking what we’ve learned and beginning to implement it into a holistic type of pain management within the hospital that each physician can tailor to the individual patient but still have the framework to support them,” he said. This ambitious plan is precisely the goal of RADEO, Mr. Vuernick said: providing the catalyst for change not just for hospital medicine but also for entire institutions.