Immediate statin after acute stroke reduces disability

Giving intensive statin therapy to patients with acute mild ischemic stroke or high-risk transient ischemic attack (TIA) immediately after onset significantly reduces the risk for a poor functional outcome compared with delaying treatment, without compromising safety, results of the INSPIRES trial show.

The research, presented at the annual European Stroke Organisation Conference, also showed that intensive antiplatelet therapy reduced the risk for recurrent stroke, albeit at an increased bleeding risk versus standard treatment.

The study involved more than 6,000 patients with acute mild ischemic stroke or TIA and intracranial or extracranial atherosclerosis (ICAS/ECAS), who were randomly assigned in a 2 x 2 factorial design to compare intensive versus standard antiplatelet therapy and intensive statin therapy within 24 hours versus waiting up to 72 hours after onset.

Intensive antiplatelet therapy with clopidogrel plus aspirin reduced the risk for recurrent stroke within 90 days by 21% versus standard single-agent therapy, although it also doubled the risk for moderate to severe bleeding.

Starting intensive statin therapy with atorvastatin within 24 hours of onset had no impact on recurrent stroke risk but did reduce the risk for a poor functional outcome versus waiting up to 72 hours by 16%.

Moreover, it was “safe, with no increased risk of bleeding, hepatotoxicity, or muscle toxicity,” said study presenter Yilong Wang, MD, department of neurology, Beijing Tiantan Hospital, National Clinical Research Center.

There was, however, a suggestion of an interaction between intensive antiplatelet therapy and immediate intensive statin therapy, he noted, with a trend toward increased bleeding vs delaying the start of statin therapy.

Approached for comment, session cochair Carlos Molina, MD, director of the stroke unit and brain hemodynamics in Hospital Universitari Vall d’Hebron, Barcelona, said that the study is “important because when we look at studies of minor stroke and TIA, they are just focused on long-term outcomes in terms of recurrent stroke.”

He said in an interview that “putting statins in the equation and looking at their impact on long-term outcomes, the study demonstrates that statins are associated ... in particular with reductions in disabling stroke, and that’s good.”
 

Recurrence and progression

Dr. Wang began by highlighting that acute mild stroke and high-risk TIA are common and underestimated, with a relatively high risk for recurrence and progression, often caused by ICAS/ECAS.

Numerous guidelines recommend intensive antiplatelet therapy in the first 24 hours after the event, but Wang pointed out that there is little evidence to support this, and a meta-analysis suggested the window for effective treatment may be up to 72 hours.

In addition, intensive statin therapy appears to be beneficial for the secondary prevention of atherosclerotic stroke in the nonacute phase, although there is no evidence for any neuroprotective effect in the acute phase or for the optimal timing of starting the drugs.

Dr. Wang also noted that there is the potential for an interaction between intensive antiplatelet and statin therapy that could increase the risk for bleeding.

To investigate further, the researchers conducted a multicenter study involving patients aged 35-80 years with acute ischemic stroke or TIA.

The former was defined as an acute single infarction with 50% or greater stenosis of a major intracranial or extracranial artery that “probably account for the infarction and symptoms,” or multiple infarctions of large artery origin, including nonstenotic vulnerable plaques.

Patients were required to have a National Institutes of Health Stroke Scale score of 4-5 at 24 hours or less from acute stroke onset, or of 0-5 between 24 and 72 hours after onset.

TIA was defined as 50% or more stenosis of major intracranial or extracranial arteries that probably account for the symptoms, and an ABCD2 score for stroke risk of 4 or more within 24-72 hours of onset.

Patients were excluded if they had received dual antiplatelet therapy with aspirin and clopidogrel or high-intensity statin therapy within 14 days of random assignment or had intravenous thrombolysis or endovascular therapy after acute stroke or TIA onset.

Those included in the trial were randomly assigned in a 2 x 2 factorial design to receive:

  • Intensive (dual) antiplatelet therapy with clopidogrel and aspirin plus immediate high-intensity statin therapy with atorvastatin
  • Intensive antiplatelet therapy plus delayed high-intensity statin therapy
  • Standard antiplatelet therapy with aspirin alone plus immediate high-intensity statin therapy
  • Standard antiplatelet therapy plus delayed high-intensity statin therapy
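
For readers unfamiliar with factorial trials, the four groups above are simply the cross-product of the two treatment factors. A minimal illustrative sketch (the arm labels and the toy `assign_arm` function are ours, not the trial's actual randomization scheme):

```python
import itertools
import random

# The two independent factors being compared (labels paraphrase the arms above).
antiplatelet = ["clopidogrel + aspirin (intensive)", "aspirin alone (standard)"]
statin_timing = ["immediate high-intensity atorvastatin",
                 "delayed high-intensity atorvastatin"]

# A 2 x 2 factorial design crosses each level of one factor with each level
# of the other, producing the four treatment groups listed above.
arms = list(itertools.product(antiplatelet, statin_timing))
assert len(arms) == 4

def assign_arm(patient_id: int, seed: int = 42) -> tuple:
    """Toy deterministic 1:1:1:1 assignment; real trials use concealed,
    often stratified randomization schemes."""
    return random.Random(seed * 1_000_003 + patient_id).choice(arms)
```

The factorial layout is what lets a single cohort answer two independent questions (antiplatelet intensity and statin timing) at once.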

In all, 6,100 patients were enrolled from 222 hospitals in 99 cities across 25 provinces in China. The mean age was 65 years, and 34.6%-37.0% were women. TIA was recorded in 12.2%-14.1% of patients; 19.5%-19.7% had a single acute infarction, and 66.4%-68.1% had acute multiple infarctions.

The time to randomization was 24 hours or less after event onset in 12.5%-13.2% of cases versus 24-48 hours in 41.2%-42.5% and 48 hours or more in 44.9%-45.7% of patients.

The primary efficacy outcome, defined as stroke at 90 days, was significantly less common with intensive versus standard antiplatelet therapy, at a cumulative probability of 7.3% versus 9.2% (hazard ratio, 0.79; 95% confidence interval, 0.66-0.94; P = .007).

Clopidogrel plus aspirin was also associated with a significant reduction in a composite vascular event of stroke, myocardial infarction, or vascular death versus aspirin alone, at 7.5% versus 9.3% (HR, 0.80; 95% CI, 0.67-0.95; P = .01), as well as a reduction in rates of ischemic stroke (P = .002) and TIA (P = .02).

The primary safety outcome, defined as moderate to severe bleeding on the GUSTO criteria, was increased with intensive antiplatelet therapy, at 0.9% versus 0.4% for aspirin alone (HR, 2.08; 95% CI, 1.07-4.03; P = .02).

Turning to statin use, Dr. Wang showed that there was no significant difference in rates of stroke at 90 days between delayed and immediate intensive therapy, at a cumulative probability of 8.4% versus 8.1% (HR, 0.95; P = .58).

There was also no difference in rates of moderate to severe bleeding, at 0.8% with immediate versus 0.6% for delayed intensive statin therapy (HR, 1.36; 95% CI, 0.73-2.54; P = .34).

Dr. Wang reported that there were no significant differences in key secondary efficacy and safety outcomes.

Analysis of the distribution of modified Rankin Scale scores at 90 days, however, indicated that there was a significant reduction in the risk for poor functional outcome, defined as a score of 2-6, with immediate versus delayed statin therapy (odds ratio, 0.84; 95% CI, 0.72-0.99; P = .04).

Finally, combining dual antiplatelet therapy with immediate intensive statin therapy was associated with a numerically higher rate of moderate to severe bleeding than with delayed statin therapy, at 1.1% versus 0.7% of patients, although the difference did not reach statistical significance (HR, 1.70; 95% CI, 0.78-3.71; P = .18).
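
As a side note on the arithmetic behind the percentages: a hazard ratio below 1 translates into the relative reductions quoted above (for example, HR 0.79 implies roughly a 21% lower risk), and a 95% confidence interval that contains 1 is what marks a comparison as nonsignificant. A small illustrative sketch (the helper names are ours):

```python
def relative_risk_reduction(hazard_ratio: float) -> float:
    """Approximate relative reduction implied by a hazard ratio below 1."""
    return 1.0 - hazard_ratio

def ci_excludes_one(lower: float, upper: float) -> bool:
    """A 95% CI for a hazard ratio is conventionally 'significant' if it excludes 1."""
    return upper < 1.0 or lower > 1.0

# Antiplatelet comparison: HR 0.79 (95% CI, 0.66-0.94) -> ~21% relative reduction.
print(round(relative_risk_reduction(0.79), 2))  # 0.21
print(ci_excludes_one(0.66, 0.94))              # True: interval excludes 1

# Bleeding with DAPT plus immediate statin: HR 1.70 (95% CI, 0.78-3.71).
print(ci_excludes_one(0.78, 3.71))              # False: interval contains 1
```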

The study was funded by the National Natural Science Foundation of China, the National Key R&D Program of China, the Beijing Outstanding Young Scientist Program, the Beijing Youth Scholar Program, and the Beijing Talent Project. The drug was provided by Sanofi and Jialin Pharmaceutical. No relevant financial relationships were declared.

A version of this article originally appeared on Medscape.com.

Article Source

AT ESOC 2023


Antibody linked to spontaneous reversal of ATTR-CM


The identification of an antibody linked to spontaneous reversal of cardiac transthyretin amyloidosis may represent a novel approach to treating this otherwise universally progressive and fatal condition.

Cardiac transthyretin amyloidosis (also called ATTR amyloidosis cardiomyopathy or ATTR-CM) is a progressive disease and a cause of heart failure resulting from accumulation of the protein transthyretin, which misfolds and forms amyloid deposits on the walls of the heart, causing both systolic and diastolic dysfunction.

The condition is progressive and normally fatal within a few years of diagnosis. Treatment options are limited and aimed at slowing progression; nothing has been shown to reverse the course of the disease.

However, an international team of researchers is now reporting the discovery of three patients with ATTR-CM–associated heart failure in whom the condition resolved spontaneously, with reversion to near-normal cardiac structure and function. On further investigation, these three patients were found to have developed circulating polyclonal IgG antibodies to human ATTR amyloid.

They are hopeful that a monoclonal form of these antibodies could be developed and may represent a novel treatment, or even a cure, for the condition.

The researchers report their findings in a letter to the New England Journal of Medicine.

“We are very optimistic about this discovery of these antibodies. They could become the first treatment to clear the amyloid that causes this horribly progressive and fatal condition,” senior author Julian Gillmore, MD, head of the University College London Centre for Amyloidosis, based at the Royal Free Hospital, said in an interview.

“Obviously, there is a lot of work to do before we can say this is the case, but it is very exciting,” he added.

Dr. Gillmore explained how the antibodies were discovered. “This disease has a universally progressive course, but we had one patient who on a repeat appointment said he felt better and on detailed cardiac MRI imaging, we found that the amyloid in his heart had reduced. That is totally unheard of,” he said.

“We then looked back at our cohort of 1,663 patients with ATTR-cardiomyopathy, and we discovered two others who had also improved both on imaging and clinically,” Dr. Gillmore said.

Each of these three patients reported a reduction in symptoms, although they had not received any new or potentially disease-modifying treatments. None of the patients had had recent vaccinations, notable infections, or any clinical suggestion of myocarditis.

Clinical recovery was corroborated by substantial improvement or normalization of findings on echocardiography, serum biomarker levels, and results of cardiopulmonary exercise tests and scintigraphy.

Serial cardiac MRI scans confirmed near-complete regression of myocardial extracellular volume, coupled with remodeling to near-normal cardiac structure and function without scarring.

The researchers wondered whether the changes in these patients may have been brought about by an antibody response. On further investigation, they found antibodies in the three patients that bound specifically to ATTR amyloid deposits in a transgenic mouse model of the condition, and to synthetic ATTR amyloid. No such antibodies were present in the other 350 patients in the cohort with a typical clinical course.

“The cause and clinical significance of the anti-ATTR amyloid antibodies are intriguing and presently unclear. However, the clinical recovery of these patients establishes the unanticipated potential for reversibility of ATTR-CM and raises expectations for its treatment,” the researchers conclude.

Dr. Gillmore said they didn’t know why these three patients had these antibodies, while all the other patients did not. “There must be something different about these patients. We don’t know what that is at present, but we are looking hard.”

The researchers are hoping that after this publication, other centers caring for patients with ATTR-cardiomyopathy will look in their cohorts and see if they can identify other cases where there has been improvement.

“It is very plausible that they do have such cases, but they will be rare, as we all think of this disease as universally progressive and fatal,” Dr. Gillmore noted.

“We haven’t absolutely proven that the antibodies have caused the clearance of amyloid in these patients, but we strongly suspect this to be the case,” Dr. Gillmore said. The researchers are planning to try to confirm this by isolating the antibodies and treating the transgenic mice.

Dr. Gillmore attributed the current discovery to the development of novel imaging cardiac MRI techniques. “This allowed us to monitor closely the amyloid burden in the heart. The observation that this had diminished in these three patients was the breakthrough that led us to look for antibodies.”

Another antibody product directed against ATTR cardiomyopathy is also in development by Neurimmune, a Swiss biopharmaceutical company. A phase 1 study of this agent was recently published, suggesting that it appeared to reduce the amount of amyloid protein deposited in the heart.

Dr. Gillmore said the antibody they have detected is different from the Neurimmune product.

The research was supported by a British Heart Foundation Intermediate Clinical Research Fellowship, a Medical Research Council Career Development Award, and a project grant from the British Heart Foundation. Dr. Gillmore reports being a consultant or expert advisory board member for Alnylam Pharmaceuticals, AstraZeneca, ATTRalus, Eidos Therapeutics, Intellia Therapeutics, Ionis Pharmaceuticals, and Pfizer.

A version of this article originally appeared on Medscape.com.


“The cause and clinical significance of the anti-ATTR amyloid antibodies are intriguing and presently unclear. However, the clinical recovery of these patients establishes the unanticipated potential for reversibility of ATTR-CM and raises expectations for its treatment,” the researchers conclude.

Dr. Gillmore said they didn’t know why these three patients had these antibodies, while all the other patients did not. “There must be something different about these patients. We don’t know what that is at present, but we are looking hard.”

The researchers are hoping that after this publication, other centers caring for patients with ATTR-cardiomyopathy will look in their cohorts and see if they can identify other cases where there has been improvement.

“It is very plausible that they do have such cases, but they will be rare, as we all think of this disease as universally progressive and fatal,” Dr. Gillmore noted.

“We haven’t absolutely proven that the antibodies have caused the clearance of amyloid in these patients, but we strongly suspect this to be the case,” Dr. Gillmore said. The researchers are planning to try to confirm this by isolating the antibodies and treating the transgenic mice.

Dr. Gillmore attributed the current discovery to the development of novel cardiac MRI imaging techniques. “This allowed us to monitor closely the amyloid burden in the heart. The observation that this had diminished in these three patients was the breakthrough that led us to look for antibodies.”

Another antibody product directed against ATTR cardiomyopathy is also in development by Neurimmune, a Swiss biopharmaceutical company. A phase 1 study of this agent was recently published, suggesting that it appeared to reduce the amount of amyloid protein deposited in the heart.

Dr. Gillmore said the antibody they have detected is different from the Neurimmune product.

The research was supported by a British Heart Foundation Intermediate Clinical Research Fellowship, a Medical Research Council Career Development Award, and a project grant from the British Heart Foundation. Dr. Gillmore reports being a consultant or expert advisory board member for Alnylam Pharmaceuticals, AstraZeneca, ATTRalus, Eidos Therapeutics, Intellia Therapeutics, Ionis Pharmaceuticals, and Pfizer.

A version of this article originally appeared on Medscape.com.


FROM NEW ENGLAND JOURNAL OF MEDICINE


Big boost in sodium excretion with HF diuretic protocol 


In patients with acute heart failure, a urine sodium-guided diuretic protocol, currently recommended in guidelines from the Heart Failure Association of the European Society of Cardiology (HFA-ESC), led to significant increases in natriuresis and diuresis over 2 days in the prospective ENACT-HF clinical trial.

The guideline protocol was based on a 2019 HFA position paper with expert consensus, but it had not been tested prospectively, Jeroen Dauw, MD, of AZ Sint-Lucas Ghent (Belgium), explained in a presentation at HFA-ESC 2023.

“We had 282 millimoles of sodium excretion after one day, which is an increase of 64%, compared with standard of care,” Dr. Dauw told meeting attendees. “We wanted to power for 15%, so we’re way above it, with a P value of lower than 0.001.”

The effect was consistent across predefined subgroups, he said. “In addition, there’s an even higher benefit in patients with a lower eGFR [estimated glomerular filtration rate] and a higher home dose of loop diuretics, which might signal more diuretic resistance and more benefit of the protocol.”

After 2 days, the investigators saw 52% higher natriuresis and 33% higher diuresis, compared with usual care.

In an interview, Dr. Dauw said, “The protocol is feasible, safe, and very effective. Cardiologists might consider how to implement a similar protocol in their center to improve the care of their acute heart failure patients.”

Twice the oral home dose

The investigators conducted a multicenter, open-label, nonrandomized pragmatic trial at 29 centers in 18 countries. “We aimed to recruit 500 to detect a 15% difference in natriuresis,” Dr. Dauw said in his presentation, “but because we were a really low-budget trial, we had to stop after 3 years of recruitment.”

Therefore, 401 patients participated, 254 in the standard-of-care (SOC) arm and 147 in the protocol arm, because of the sequential nature of the study; that is, patients in the SOC arm of the two-phase study were recruited first.

Patients’ mean age was 70 years, 38% were women, and all had at least one sign of volume overload. They were on a maintenance daily diuretic dose of 40 mg of furosemide for a month or more, and NT-proBNP was above 1,000 pg/mL.

In phase 1 of the study, all centers treated 10 consecutive patients according to the local standard of care, at the discretion of the physician. In phase 2, the centers again recruited and treated at least 10 consecutive patients, this time according to the standardized diuretic protocol.

In the protocol phase, patients were treated with twice the oral home dose as an IV bolus. “This meant if, for example, you have 40 mg of furosemide at home, then you receive 80 mg as a first bolus,” Dr. Dauw told attendees. A spot urine sample was taken after 2 hours, and the response was evaluated after 6 hours. A urine sodium above 50 millimoles per liter was considered a good response.

On the second day, patients were reevaluated in the morning, using urine output as the measure of diuretic response. If output was above 3 L, the same bolus was repeated twice daily, with 6-12 hours between administrations.
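The dosing and reassessment steps described above amount to a small decision procedure. The following Python sketch restates them purely for clarity; the thresholds (urine sodium above 50 mmol/L, urine output above 3 L) come from the article, but all function and field names are invented for this illustration, the escalation arm for nonresponders is not detailed in the article, and this is an explanatory sketch, not clinical software.

```python
# Illustrative restatement of the natriuresis-guided diuretic protocol
# described in the article. Names are invented; NOT clinical software.

def day1_iv_bolus_mg(oral_home_dose_mg: float) -> float:
    """Day 1: give twice the oral home dose as an IV bolus
    (e.g., 40 mg oral furosemide at home -> 80 mg IV bolus)."""
    return 2 * oral_home_dose_mg

def good_spot_sodium_response(urine_sodium_mmol_per_l: float) -> bool:
    """Spot urine taken at 2 h, response evaluated at 6 h; a urine
    sodium above 50 mmol/L counts as a good response."""
    return urine_sodium_mmol_per_l > 50

def day2_plan(urine_output_l: float, prior_bolus_mg: float) -> str:
    """Day 2 morning: reassess on urine output. If output exceeded
    3 L, repeat the same bolus twice daily, 6-12 h apart."""
    if urine_output_l > 3:
        return f"repeat {prior_bolus_mg:g} mg IV bolus twice daily (6-12 h apart)"
    # The article does not spell out the nonresponder arm; per the
    # underlying HFA position paper the dose would typically be escalated.
    return "inadequate response: escalate per protocol (not detailed here)"

# Worked example matching the dose quoted in the article:
bolus = day1_iv_bolus_mg(40)               # 80 mg first IV bolus
responded = good_spot_sodium_response(72)  # True: above the 50 mmol/L cutoff
plan = day2_plan(3.4, bolus)               # output > 3 L: repeat same bolus
```

The two cutoffs are deliberately kept as plain comparisons so the sketch mirrors the bedside rules rather than hiding them in configuration.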

As noted, after one day, natriuresis was 174 millimoles in the SOC arm versus 282 millimoles in the protocol group – an increase of 64%. The effect was consistent across subgroups, and those with a lower eGFR and a higher home dose of loop diuretics benefited more.

Furthermore, Dr. Dauw said, there was no interaction on the endpoints with SGLT2 inhibitor use at baseline.

After two days, natriuresis was 52% higher in the protocol group and diuresis was 33% higher.

However, there was no significant difference in weight loss and no difference in the congestion score.

“We did expect to see a difference in weight loss between the study groups, as higher natriuresis and diuresis would normally be associated with higher weight loss in the protocol group,” Dr. Dauw told this news organization. “However, looking back at the study design, weight was collected from the electronic health records and not rigorously collected by study nurses. Previous studies have shown discrepancies between fluid loss and weight loss, so this is an ‘explainable’ finding.”

Participants also had a relatively high congestion score at baseline, with edema above the knee and also some pleural effusion, he told meeting attendees. Therefore, it might take more time to see a change in congestion score in those patients.

The protocol also led to a shorter length of stay – one day less in the hospital – and was very safe on renal endpoints, Dr. Dauw concluded.

A session chair asked why only patients already on diuretics were included in the study, noting that in his clinic, about half of the admissions are de novo.

Dr. Dauw said that patients already taking diuretics chronically would benefit most from the protocol. “If patients are diuretic-naive, they probably will respond well to whatever you do; if you just give a higher dose, they will respond well,” he said. “We expected that the largest benefit would be in patients already taking diuretics because they have a higher chance of not responding well.”

“There also was a big difference in the starting dose,” he added. “In the SOC arm, the baseline dose was about 60 mg, whereas we gave 120 mg, and we could already see a high difference in the effect. So, in those patients, I think the gain is bigger if you follow the protocol.”

More data coming

Looking ahead, “we only showed efficacy in the first 2 days of treatment and a shorter length of stay, probably reflecting a faster decongestion, but we don’t know for sure,” Dr. Dauw told this news organization.

“It would be important to have a study where the protocol is followed until full decongestion is reached,” he said. “That way, we can directly prove that decongestion is better and/or faster with the protocol.”

“A good decongestive strategy is one that is fast, safe and effective in decreasing signs and symptoms that patients suffer from,” he added. “We believe our protocol can achieve that, but our study is only one piece of the puzzle.”

More data on natriuresis-guided decongestion is coming this year, he said, with the PUSH-AHF study from Groningen, the European DECONGEST study, and the U.S. ESCALATE study.

The study had no funding. Dr. Dauw declared no relevant financial relationships.

A version of this article originally appeared on Medscape.com.


FROM HFA-ESC 2023


High Lp(a) tied to higher coronary plaque volume, progression


Patients with high lipoprotein(a) (Lp[a]) levels not only have an almost twofold higher coronary plaque burden than those with low levels but also a faster rate of plaque progression, an observational imaging study shows.

This could explain the greater risk for major adverse cardiovascular events seen in patients with high Lp(a) levels, suggests the research, presented during the annual European Atherosclerosis Society Congress.

The team performed follow-up coronary CT angiography (CCTA) on almost 275 patients who had undergone imaging approximately 10 years earlier, finding that almost one-third had high Lp(a) levels.

At baseline, percent plaque volumes were 1.8 times greater in patients with high Lp(a) levels than in those with low levels. After 10 years, plaque volumes were 3.3 times larger in patients with high Lp(a) levels.

Over this period, the rate of increase of plaque volume was 1.9 times greater in patients with high Lp(a) levels.

Study presenter Nick S. Nurmohamed, MD, PhD candidate, department of vascular medicine, Amsterdam University Medical Centers, also showed that high Lp(a) levels were associated with a 2.1-fold increase in rates of MACE.

He said in an interview that this finding could be related to Lp(a) increasing inflammatory signaling in the plaque, “making it more prone to rupture, and we saw that on the CCTA scans,” where high Lp(a) levels were associated with the presence of more high-risk plaques.

He added that in the absence of drugs that target Lp(a) levels directly, the results underline the need to focus on other means of lipid-lowering, as well as “creating awareness that Lp(a) is associated with plaque formation.”

Dr. Nurmohamed said that “for the moment, we have to treat patients with high Lp(a) with other risk-lowering therapies, such as low-density lipoprotein [LDL] cholesterol–lowering drugs, and the management of other risk factors.”

However, he noted that “there are a couple of Lp(a)-lowering medications in trials,” with results expected in the next 2-3 years.

“Then we will have the means to treat those patients, and with CCTA we can identify the patients with the biggest risk,” Dr. Nurmohamed added.
 

Plaque burden

Philippe Moulin, MD, PhD, head of endocrinology and professor of human nutrition at Faculté Lyon Est, Claude Bernard Lyon (France) 1 University, said that the association between Lp(a) and plaque burden has been seen previously in the literature in a very similar study but with only 1-year follow-up.

Similarly, registry data have suggested that Lp(a) is associated with worsening plaque progression over time.

“Here, with 10-year follow-up, [the study] is much more interesting,” due to its greater statistical power, he said in an interview. It is also “well-documented” and uses an “appropriate” methodology.

But Dr. Moulin underlined that the number of patients with high Lp(a) levels included in the study is relatively small.

Consequently, the researchers were not able to look at the level and rate of progression of atherosclerosis between different quartiles of Lp(a), “so you have no dose-response analysis.”

It also does not “establish causality,” as it remains an observational study, despite being longitudinal, “well done, and so on.”

Dr. Moulin added that the study nevertheless adds “one more stone” to the construct of the idea of high risk around high Lp(a) levels, and “prepares the ground” for the availability of two drugs to decrease Lp(a) levels, expected in 2026 and 2027.

These are expected to substantially reduce Lp(a) levels and achieve a reduction in cardiovascular risk of around 20%-40%, “which would be interesting,” especially as “we have patients who have Lp(a) levels four times above the upper normal value.”

Crucially, they may already have normal LDL cholesterol levels, meaning that, for some patients, “there is clearly a need for such treatment, as long as it is proven that it will decrease cardiovascular risk.”

For the moment, however, the strategy for managing patients with high Lp(a) remains to increase the dose of statin and to have more stringent targets, although Dr. Moulin pointed out that, “when you give statins, you raise slightly Lp(a) levels.”

Dr. Nurmohamed said in an interview that “we know from largely genetic and observational studies that Lp(a) is causally associated with atherosclerotic cardiovascular disease.”

What is less clear is the exact underlying mechanism, he said, noting that there have been several imaging studies in high and low Lp(a) patients that have yielded conflicting results in terms of the relationship with plaque burden.

To investigate the impact of Lp(a) levels on long-term coronary plaque progression, the team invited patients who had taken part in a previous CCTA study to undergo repeat CCTA, regardless of their underlying symptoms.

In all, 299 patients underwent follow-up imaging a median of 10.2 years after their original scan. Plaque volumes were quantified and adjusted for vessel volumes, and the patients were classified as having high (≥ 70 nmol/L) or low (< 70 nmol/L) Lp(a) levels.
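
As a minimal sketch of the study's dichotomization rule (the function name and its signature are ours, not the authors'), the cutoff can be expressed as:

```python
# Threshold used in the study: high Lp(a) is >= 70 nmol/L, low is < 70 nmol/L.
LPA_THRESHOLD_NMOL_L = 70.0

def classify_lpa(level_nmol_l: float) -> str:
    """Label an Lp(a) measurement as 'high' or 'low' per the study's cutoff."""
    return "high" if level_nmol_l >= LPA_THRESHOLD_NMOL_L else "low"
```

Note that a level of exactly 70 nmol/L falls in the high group, since the study defines high as greater than or equal to the threshold.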

After excluding patients who had undergone coronary artery bypass grafting, the team analyzed 274 patients with a mean age at baseline of 57 years. Of these, 159 (58%) were men. High Lp(a) levels were identified in 87 (32%) patients.

The team found that at baseline, patients with high Lp(a) levels had significantly larger percent atheroma volumes than those with low levels, at 3.92% versus 2.17%, or an absolute difference of 1.75% (P = .013).

The difference between the two groups was even greater at the follow-up, when percent atheroma volumes reached 8.75% in patients with high Lp(a) levels versus 3.90% for those with low levels, or an absolute difference of 4.85% (P = .005).
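
The reported absolute differences follow directly from the group means; a quick arithmetic check (values are taken from the text, the rounding is ours):

```python
# Percent atheroma volumes reported in the study.
high_baseline, low_baseline = 3.92, 2.17
high_followup, low_followup = 8.75, 3.90

# Absolute between-group differences at each time point.
diff_baseline = round(high_baseline - low_baseline, 2)   # 1.75
diff_followup = round(high_followup - low_followup, 2)   # 4.85

# Relative plaque burden at baseline: roughly 1.8-fold higher with high Lp(a).
ratio_baseline = round(high_baseline / low_baseline, 1)  # 1.8
```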

Similar findings were seen when looking separately at percentage of noncalcified and calcified plaque volumes as well as when analyzing for the presence of low-density plaques.

Multivariate analysis taking into account clinical risk factors, statin use, and CT tube voltage found that high Lp(a) levels were associated with a greater percent atheroma volume at baseline, at an adjusted difference versus low Lp(a) of 1.83 percentage points (95% confidence interval, 0.12-3.54; P = .037).

High Lp(a) levels were also linked to a larger percent atheroma volume on follow-up imaging, at an adjusted difference of 3.25 percentage points (95% CI, 0.80-5.71; P = .010), and a significantly greater change in atheroma volume from baseline to follow-up imaging, at an adjusted difference of 1.86 percentage points (95% CI, 0.59-3.14; P = .005).

Finally, the team showed that, after adjusting for clinical risk factors, high baseline Lp(a) levels were associated with an increased risk of MACE during the follow-up period, at a hazard ratio versus low Lp(a) levels of 2.10 (95% CI, 1.01-4.29; P = .048).

No funding was declared. Dr. Nurmohamed is cofounder of Lipid Tools. Other authors declare relationships with Amgen, Novartis, Esperion, Sanofi-Regeneron, Ackee, Cleerly, GW Heart and Vascular Institute, Siemens Healthineers, and HeartFlow.

 

 

A version of this article first appeared on Medscape.com.


AT EAS 2023


Is ChatGPT a friend or foe of medical publishing?


Researchers may use artificial intelligence (AI) language models such as ChatGPT to write and revise scientific manuscripts, according to a new announcement from the International Committee of Medical Journal Editors. These tools should not be listed as authors, and researchers must disclose how AI-assisted technologies were used, the committee said.

These new guidelines are the latest effort by medical journals to define policies for the use of these large language models (LLMs) in scientific publication. While these AI-assisted tools can help with tasks such as writing, analyzing data, catching mistakes, and much more, they are also prone to errors, noted Casey Greene, PhD, a professor of biomedical informatics at the University of Colorado at Denver, Aurora. It is also not totally clear how information is stored and processed in these kinds of tools, or who has access to that information, he noted.

At the same time, experts argue that these AI tools could have a positive impact on the field by limiting some of the linguistic disparities in scientific publishing as well as alleviating the burden of some monotonous or mechanical tasks that come along with manuscript writing.

What experts can agree on, though, is that the use of AI tools is here to stay. “This is going to become a common tool,” Dr. Greene said. “I don’t think there’s a way out of that at this point.”
 

A change in medical publishing

OpenAI released ChatGPT in November 2022. In its own words, ChatGPT is “a deep learning model that has been trained on a massive amount of text data to understand and generate humanlike text.” Enter a question or a prompt, and it will respond. For example, when asked how the AI tool can be used in scientific publishing, ChatGPT responded:

“ChatGPT can aid scientific publishing by generating ideas, clarifying concepts, conducting preliminary literature reviews, providing proofreading and editing suggestions, and assisting with outlining and organizing scientific papers. However, it should be used as a supportive tool alongside domain expertise and validation from trusted scientific sources.”

Just a few months after ChatGPT became available, researchers began using this tool in their own work. One individual, Som Biswas, MD, a radiologist at the University of Tennessee Health Science Center in Memphis, reportedly used ChatGPT to author 16 scientific articles in just 4 months, according to the Daily Beast. Five of these articles have been published in four different journals. Dr. Biswas declined to be interviewed for this article.

There were also reports of papers with ChatGPT as one of the listed authors, which sparked backlash. In response, JAMA, Nature, and Science all published editorials in January outlining their policies for using ChatGPT and other large language models in the scientific authoring process. Editors from the journals of the American College of Cardiology and the American College of Rheumatology also updated their policies to reflect the influence of AI authoring tools.

The consensus is that AI has no place on the author byline.

“We think that’s not appropriate, because coauthorship means that you are taking responsibility for the analysis and the generation of data that are included in a manuscript. A machine that is dictated by AI can’t take responsibility,” said Daniel Solomon, MD, MPH, a rheumatologist at Brigham and Women’s Hospital, Boston, and the editor in chief of the ACR journal Arthritis & Rheumatology.
 

 

 

Issues with AI

One of the big concerns around using AI in writing is that it can generate text that seems plausible but is untrue or not supported by data. For example, Dr. Greene and colleague Milton Pividori, PhD, also of the University of Colorado, were writing a journal article about new software they developed that uses a large language model to revise scientific manuscripts.

“We used the same software to revise that article and at one point, it added a line that noted that the large language model had been fine-tuned on a data set of manuscripts from within the same field. This makes a lot of sense, and is absolutely something you could do, but was not something that we did,” Dr. Greene said. “Without a really careful review of the content, it becomes possible to invent things that were not actually done.”

In another case, ChatGPT falsely stated that a prominent law professor had been accused of sexual assault, citing a Washington Post article that did not exist.

“We live in a society where we are extremely concerned about fake news,” Dr. Pividori added, “and [these kinds of errors] could certainly exacerbate that in the scientific community, which is very concerning because science informs public policy.”

Another issue is the lack of transparency around how large language models like ChatGPT process and store data used to make queries.

“We have no idea how they are recording all the prompts and things that we input into ChatGPT and their systems,” Dr. Pividori said.

OpenAI recently addressed some privacy concerns by allowing users to turn off their chat history with the AI chatbot, so conversations cannot be used to train or improve the company’s models. But Dr. Greene noted that the terms of service “still remain pretty nebulous.”

Dr. Solomon is also concerned about researchers using these AI tools in authoring without knowing how they work. “The thing we are really concerned about is the fact that [LLMs] are a bit of a black box – people don’t really understand the methodologies,” he said.
 

A positive tool?

But despite these concerns, many think that these types of AI-assisted tools could have a positive impact on medical publishing, particularly for researchers for whom English is not their first language, noted Catherine Gao, MD, a pulmonary and critical care instructor at Northwestern University, Chicago. She recently led research comparing scientific abstracts generated by ChatGPT with real abstracts and found that reviewers had a “surprisingly difficult” time telling the two apart.

“The majority of research is published in English,” she said in an email. “Responsible use of LLMs can potentially reduce the burden of writing for busy scientists and improve equity for those who are not native English speakers.”

Dr. Pividori agreed, adding that as a non-native English speaker, he spends much more time working on the structure and grammar of sentences when authoring a manuscript, compared with people who speak English as a first language. He noted that these tools can also be used to automate some of the more monotonous tasks that come along with writing manuscripts and allow researchers to focus on the more creative aspects.

In the future, “I want to focus more on the things that only a human can do and let these tools do all the rest of it,” he said.
 

 

 

New rules

But despite how individual researchers feel about LLMs, they agree that these AI tools are here to stay.

“I think that we should anticipate that they will become part of the medical research establishment over time, when we figure out how to use them appropriately,” Dr. Solomon said.

While the debate of how to best use AI in medical publications will continue, journal editors agree that all authors of a manuscript are solely responsible for content in articles that used AI-assisted technology.

“Authors should carefully review and edit the result because AI can generate authoritative-sounding output that can be incorrect, incomplete, or biased,” the ICMJE guidelines state. “Authors should be able to assert that there is no plagiarism in their paper, including in text and images produced by the AI.” This includes appropriate attribution of all cited materials.

The committee also recommends that authors describe, in both the cover letter and the submitted work, how AI was used in the manuscript writing process. Recently updated guidelines from the World Association of Medical Editors recommend that all prompts used to generate new text or analytical work be provided in the submitted work. Dr. Greene also noted that authors who use an AI tool to revise their work can include a version of the manuscript untouched by LLMs.
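
One lightweight way to satisfy a prompt-disclosure recommendation is to keep a structured log of every prompt alongside the manuscript. The record format below is our own illustration; neither WAME nor the ICMJE prescribes a specific structure, and the field names are hypothetical:

```python
from datetime import date

def log_prompt(log: list, tool: str, purpose: str, prompt: str) -> list:
    """Append one AI-use record to a disclosure log kept with a submission."""
    log.append({
        "date": date.today().isoformat(),
        "tool": tool,        # e.g., the model name and version used
        "purpose": purpose,  # what the generated output was used for
        "prompt": prompt,    # the verbatim prompt, as WAME recommends
    })
    return log

# Example entry for a language-editing use of a chatbot.
disclosure_log = log_prompt(
    [], "ChatGPT", "language editing of the Methods section",
    "Revise the following paragraph for grammar and clarity: ...",
)
```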

It is similar to a preprint, he said, but rather than publishing a version of a paper prior to peer review, someone is showing a version of a manuscript before it was reviewed and revised by AI. “This type of practice could be a path that lets us benefit from these models,” he said, “without having the drawbacks that many are concerned about.”

Dr. Solomon has financial relationships with AbbVie, Amgen, Janssen, CorEvitas, and Moderna. Both Dr. Greene and Dr. Pividori are inventors in the U.S. Provisional Patent Application No. 63/486,706 that the University of Colorado has filed for the “Publishing Infrastructure For AI-Assisted Academic Authoring” invention with the U.S. Patent and Trademark Office. Dr. Greene and Dr. Pividori also received a grant from the Alfred P. Sloan Foundation to improve their AI-based manuscript revision tool. Dr. Gao reported no relevant financial relationships.

A version of this article originally appeared on Medscape.com.


Another issue is the lack of transparency around how large language models like ChatGPT process and store data used to make queries.

“We have no idea how they are recording all the prompts and things that we input into ChatGPT and their systems,” Dr. Pividori said.

OpenAI recently addressed some privacy concerns by allowing users to turn off their chat history with the AI chatbot, so conversations cannot be used to train or improve the company’s models. But Dr. Greene noted that the terms of service “still remain pretty nebulous.”

Dr. Solomon is also concerned with researchers using these AI tools in authoring without knowing how they work. “The thing we are really concerned about is that fact that [LLMs] are a bit of a black box – people don’t really understand the methodologies,” he said.
 

A positive tool?

But despite these concerns, many think that these types of AI-assisted tools could have a positive impact on medical publishing, particularly for researchers for whom English is not their first language, noted Catherine Gao, MD, a pulmonary and critical care instructor at Northwestern University, Chicago. She recently led research comparing scientific abstracts written by ChatGPT and real abstracts and discovered that reviewers found it “surprisingly difficult” to differentiate the two.

“The majority of research is published in English,” she said in an email. “Responsible use of LLMs can potentially reduce the burden of writing for busy scientists and improve equity for those who are not native English speakers.”

Dr. Pividori agreed, adding that as a non-native English speaker, he spends much more time working on the structure and grammar of sentences when authoring a manuscript, compared with people who speak English as a first language. He noted that these tools can also be used to automate some of the more monotonous tasks that come along with writing manuscripts and allow researchers to focus on the more creative aspects.

In the future, “I want to focus more on the things that only a human can do and let these tools do all the rest of it,” he said.
 

 

 

New rules

But despite how individual researchers feel about LLMs, they agree that these AI tools are here to stay.

“I think that we should anticipate that they will become part of the medical research establishment over time, when we figure out how to use them appropriately,” Dr. Solomon said.

While the debate of how to best use AI in medical publications will continue, journal editors agree that all authors of a manuscript are solely responsible for content in articles that used AI-assisted technology.

“Authors should carefully review and edit the result because AI can generate authoritative-sounding output that can be incorrect, incomplete, or biased,” the ICMJE guidelines state. “Authors should be able to assert that there is no plagiarism in their paper, including in text and images produced by the AI.” This includes appropriate attribution of all cited materials.

The committee also recommends that authors write in both the cover letter and submitted work how AI was used in the manuscript writing process. Recently updated guidelines from the World Association of Medical Editors recommend that all prompts used to generate new text or analytical work should be provided in submitted work. Dr. Greene also noted that if authors used an AI tool to revise their work, they can include a version of the manuscript untouched by LLMs.

It is similar to a preprint, he said, but rather than publishing a version of a paper prior to peer review, someone is showing a version of a manuscript before it was reviewed and revised by AI. “This type of practice could be a path that lets us benefit from these models,” he said, “without having the drawbacks that many are concerned about.”

Dr. Solomon has financial relationships with AbbVie, Amgen, Janssen, CorEvitas, and Moderna. Both Dr. Greene and Dr. Pividori are inventors in the U.S. Provisional Patent Application No. 63/486,706 that the University of Colorado has filed for the “Publishing Infrastructure For AI-Assisted Academic Authoring” invention with the U.S. Patent and Trademark Office. Dr. Greene and Dr. Pividori also received a grant from the Alfred P. Sloan Foundation to improve their AI-based manuscript revision tool. Dr. Gao reported no relevant financial relationships.

A version of this article originally appeared on Medscape.com.

 

Researchers may use artificial intelligence (AI) language models such as ChatGPT to write and revise scientific manuscripts, according to a new announcement from the International Committee of Medical Journal Editors. These tools should not be listed as authors, and researchers must denote how AI-assisted technologies were used, the committee said.

These new guidelines are the latest effort by medical journals to define policies for using large language models (LLMs) in scientific publication. While these AI-assisted tools can help with tasks such as writing, analyzing data, catching mistakes, and much more, they are also prone to errors, noted Casey Greene, PhD, a professor of biomedical informatics at the University of Colorado at Denver, Aurora. It is also not entirely clear how information is stored and processed in these tools, or who has access to that information, he noted.

At the same time, experts argue that these AI tools could have a positive impact on the field by limiting some of the linguistic disparities in scientific publishing as well as alleviating the burden of some monotonous or mechanical tasks that come along with manuscript writing.

What experts can agree on, though, is that the use of AI tools is here to stay. “This is going to become a common tool,” Dr. Greene said. “I don’t think there’s a way out of that at this point.”
 

A change in medical publishing

OpenAI released ChatGPT in November 2022. In its own words, ChatGPT is “a deep learning model that has been trained on a massive amount of text data to understand and generate humanlike text.” Enter a question or a prompt, and it will respond. For example, when asked how the AI tool can be used in scientific publishing, ChatGPT responded:

“ChatGPT can aid scientific publishing by generating ideas, clarifying concepts, conducting preliminary literature reviews, providing proofreading and editing suggestions, and assisting with outlining and organizing scientific papers. However, it should be used as a supportive tool alongside domain expertise and validation from trusted scientific sources.”

Just a few months after ChatGPT became available, researchers began using this tool in their own work. One individual, Som Biswas, MD, a radiologist at the University of Tennessee Health Science Center in Memphis, reportedly used ChatGPT to author 16 scientific articles in just 4 months, according to the Daily Beast. Five of these articles have been published in four different journals. Dr. Biswas declined to be interviewed for this article.

There were also reports of papers with ChatGPT as one of the listed authors, which sparked backlash. In response, JAMA, Nature, and Science all published editorials in January outlining their policies for using ChatGPT and other large language models in the scientific authoring process. Editors from the journals of the American College of Cardiology and the American College of Rheumatology also updated their policies to reflect the influence of AI authoring tools.

The consensus is that AI has no place on the author byline.

“We think that’s not appropriate, because coauthorship means that you are taking responsibility for the analysis and the generation of data that are included in a manuscript. A machine that is dictated by AI can’t take responsibility,” said Daniel Solomon, MD, MPH, a rheumatologist at Brigham and Women’s Hospital, Boston, and the editor in chief of the ACR journal Arthritis & Rheumatology.
 

 

 

Issues with AI

One of the big concerns around using AI in writing is that it can generate text that seems plausible but is untrue or not supported by data. For example, Dr. Greene and colleague Milton Pividori, PhD, also of the University of Colorado, were writing a journal article about new software they developed that uses a large language model to revise scientific manuscripts.

“We used the same software to revise that article and at one point, it added a line that noted that the large language model had been fine-tuned on a data set of manuscripts from within the same field. This makes a lot of sense, and is absolutely something you could do, but was not something that we did,” Dr. Greene said. “Without a really careful review of the content, it becomes possible to invent things that were not actually done.”

In another case, ChatGPT falsely stated that a prominent law professor had been accused of sexual assault, citing a Washington Post article that did not exist.

“We live in a society where we are extremely concerned about fake news,” Dr. Pividori added, “and [these kinds of errors] could certainly exacerbate that in the scientific community, which is very concerning because science informs public policy.”

Another issue is the lack of transparency around how large language models like ChatGPT process and store data used to make queries.

“We have no idea how they are recording all the prompts and things that we input into ChatGPT and their systems,” Dr. Pividori said.

OpenAI recently addressed some privacy concerns by allowing users to turn off their chat history with the AI chatbot, so conversations cannot be used to train or improve the company’s models. But Dr. Greene noted that the terms of service “still remain pretty nebulous.”

Dr. Solomon is also concerned about researchers using these AI tools in authoring without knowing how they work. “The thing we are really concerned about is the fact that [LLMs] are a bit of a black box – people don’t really understand the methodologies,” he said.
 

A positive tool?

But despite these concerns, many think that these types of AI-assisted tools could have a positive impact on medical publishing, particularly for researchers for whom English is not their first language, noted Catherine Gao, MD, a pulmonary and critical care instructor at Northwestern University, Chicago. She recently led research comparing scientific abstracts written by ChatGPT with real abstracts and found that reviewers had a “surprisingly difficult” time differentiating the two.

“The majority of research is published in English,” she said in an email. “Responsible use of LLMs can potentially reduce the burden of writing for busy scientists and improve equity for those who are not native English speakers.”

Dr. Pividori agreed, adding that as a non-native English speaker, he spends much more time working on the structure and grammar of sentences when authoring a manuscript, compared with people who speak English as a first language. He noted that these tools can also be used to automate some of the more monotonous tasks that come along with writing manuscripts and allow researchers to focus on the more creative aspects.

In the future, “I want to focus more on the things that only a human can do and let these tools do all the rest of it,” he said.
 

 

 

New rules

But despite how individual researchers feel about LLMs, they agree that these AI tools are here to stay.

“I think that we should anticipate that they will become part of the medical research establishment over time, when we figure out how to use them appropriately,” Dr. Solomon said.

While the debate over how best to use AI in medical publications will continue, journal editors agree that a manuscript's authors remain solely responsible for any content in articles that used AI-assisted technology.

“Authors should carefully review and edit the result because AI can generate authoritative-sounding output that can be incorrect, incomplete, or biased,” the ICMJE guidelines state. “Authors should be able to assert that there is no plagiarism in their paper, including in text and images produced by the AI.” This includes appropriate attribution of all cited materials.

The committee also recommends that authors write in both the cover letter and submitted work how AI was used in the manuscript writing process. Recently updated guidelines from the World Association of Medical Editors recommend that all prompts used to generate new text or analytical work should be provided in submitted work. Dr. Greene also noted that if authors used an AI tool to revise their work, they can include a version of the manuscript untouched by LLMs.

It is similar to a preprint, he said, but rather than publishing a version of a paper prior to peer review, someone is showing a version of a manuscript before it was reviewed and revised by AI. “This type of practice could be a path that lets us benefit from these models,” he said, “without having the drawbacks that many are concerned about.”

Dr. Solomon has financial relationships with AbbVie, Amgen, Janssen, CorEvitas, and Moderna. Both Dr. Greene and Dr. Pividori are inventors in the U.S. Provisional Patent Application No. 63/486,706 that the University of Colorado has filed for the “Publishing Infrastructure For AI-Assisted Academic Authoring” invention with the U.S. Patent and Trademark Office. Dr. Greene and Dr. Pividori also received a grant from the Alfred P. Sloan Foundation to improve their AI-based manuscript revision tool. Dr. Gao reported no relevant financial relationships.

A version of this article originally appeared on Medscape.com.


More support for thrombectomy in large-core strokes: TESLA, MAGNA


Although not quite meeting its primary endpoint, a new trial (TESLA) has added to evidence suggesting that patients with large ischemic strokes who have a significant amount of brain tissue damage may still benefit from thrombectomy. 

And a new meta-analysis (MAGNA) of previous studies in a similar population has provided more detailed estimates of the treatment benefit of thrombectomy in these patients. 

The TESLA trial, which included patients with large-core infarcts (ASPECTS score 2-5) within 24 hours of symptom onset, showed encouraging trends towards a benefit with thrombectomy for the primary outcome of 90-day utility-weighted scores on the modified Rankin scale (mRS), but this did not reach the prespecified Bayesian superiority threshold.

Several secondary efficacy endpoints also showed suggestions of benefits with thrombectomy.

“The interventional group had higher mean or average utility-weighted mRS scores than the control group which means that their functional recovery at 90 days was trending for better outcome and less disability,” lead TESLA investigator, Osama Zaidat, MD, neuroscience & stroke director at Mercy St. Vincent Medical Center, Toledo, Ohio, said in an interview. “They also showed better neurological improvement and a higher chance of achieving a good outcome (mRS 0-3).”

These patients with large-core infarct strokes were not included in the initial trials of endovascular therapy in patients presenting in the late time window, up to 24 hours, because it was thought they would not benefit. However, three recent trials (RESCUE-Japan LIMIT, ANGEL ASPECT, and SELECT 2) have shown that patients with large-core infarcts can still benefit from endovascular thrombectomy.

While these three previous trials used sophisticated imaging techniques (MRI or CT perfusion) to select patients, and restricted patients included to those with an ASPECTS score of 3-5, the TESLA study had a more pragmatic design, using just noncontrast CT scan evaluation without advanced imaging to select patients, and extending the inclusion criteria to patients with an ASPECTS score of 2.

“Noncontrast CT scans are available at all stroke centers so this study is more practical, highly generalizable, and more applicable globally,” Dr. Zaidat commented.

“However, our results suggest that when using noncontrast CT only to select patients, the gain or treatment effect of thrombectomy seems to be smaller than when using sophisticated advanced imaging to make the decision to go for thrombectomy or not as in the other trials,” he added.

The TESLA trial results were presented at the recent European Stroke Organisation Conference, held in Munich.

The study included 300 stroke patients with anterior circulation large-vessel occlusion (NIHSS score of 6 or more) and a large-core infarction (investigator-read ASPECTS score 2-5), selected on the basis of noncontrast CT, who were randomized to intra-arterial thrombectomy or best medical management (control) up to 24 hours from last known well.

The trial had a Bayesian design, with a primary endpoint of the 90-day utility-weighted mRS (uw-mRS), a relatively new patient-centered outcome used in stroke trials that incorporates a quality-of-life measurement. Utilities represent preferences for mRS health states and range from 0 (death) to 1 (perfect health), so in contrast to traditional mRS scores, a higher uw-mRS score is better.
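For illustration, the arithmetic behind a utility-weighted mRS is simple: each patient's 90-day mRS level is mapped to a utility weight, and the weights are averaged across the arm. The weights below are hypothetical (published implementations often rescale utilities to a 0-10 range, which is why trial means can exceed 1); TESLA's actual utility mapping is not given in this article.

```python
# Toy utility-weighted mRS calculation. The weights are HYPOTHETICAL,
# on a 0-10 scale for illustration only -- not TESLA's actual mapping.
UTILITY = {0: 10.0, 1: 9.1, 2: 7.6, 3: 6.5, 4: 3.3, 5: 0.0, 6: 0.0}

def mean_uw_mrs(mrs_scores):
    """Average utility across a list of 90-day mRS levels (0-6)."""
    return sum(UTILITY[s] for s in mrs_scores) / len(mrs_scores)

# A hypothetical cohort skewed toward severe disability, as in large-core stroke:
cohort = [2, 3, 4, 4, 5, 5, 6, 6, 6, 6]
print(round(mean_uw_mrs(cohort), 2))  # prints 2.07
```

Because the weighting compresses the difference between the worst outcomes (mRS 5 and 6 both map to 0 here), two arms with different mRS distributions can produce uw-mRS means that differ less than the raw scores suggest.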

The 90-day uw-mRS scores were 2.93 in the thrombectomy group vs. 2.27 in the control group.

The Bayesian probability of thrombectomy superiority was 0.957, which Dr. Zaidat said was “similar” to a P value of .043, but this fell short of the prespecified superiority probability of greater than .975 required to declare efficacy.
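To make the "probability of superiority" notion concrete, here is a toy Monte Carlo sketch on a binary endpoint (good outcome) rather than the trial's actual utility-weighted model; the arm sizes and event counts are assumed for illustration, loosely based on the reported 30% vs. 20% good-outcome rates, and the Beta(1, 1) prior is a generic default, not TESLA's prior.

```python
import numpy as np

# Toy Bayesian superiority check on a binary endpoint -- purely illustrative,
# not the trial's actual utility-weighted analysis.
rng = np.random.default_rng(0)
events_t, n_t = 45, 150   # thrombectomy arm (ASSUMED counts)
events_c, n_c = 30, 150   # control arm (ASSUMED counts)

# With a Beta(1, 1) prior, the posterior for each rate is
# Beta(1 + events, 1 + non-events); draw from each and compare.
p_t = rng.beta(1 + events_t, 1 + n_t - events_t, size=200_000)
p_c = rng.beta(1 + events_c, 1 + n_c - events_c, size=200_000)

prob_superior = (p_t > p_c).mean()
print(f"Posterior probability of superiority: {prob_superior:.3f}")
# A trial like TESLA declares efficacy only if this probability
# exceeds a prespecified bar (.975 in TESLA).
```

The point of the sketch is that the "probability of superiority" is a posterior quantity computed from the data, and the trial's conclusion hinges on whether it clears a threshold fixed before the data were seen.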

A separate analysis of patients selected by core lab-read noncontrast CT showed a Bayesian probability of benefit with thrombectomy of 0.98, “similar” to a one-sided P value of .02.

In terms of secondary endpoints, there were also some encouraging trends, including a suggestion of benefit in the 90-day mRS ordinal shift (odds ratio 1.40; P = .06). 

The number of patients achieving functional independence (mRS 0-2) was 14% in the thrombectomy group vs. 9% in the control group (P = .09), and a good functional outcome (mRS 0-3) was achieved in 30% of thrombectomy patients vs. 20% of controls (P = .03).

Major neurological improvement (an NIHSS score of 0-2, or an improvement of 8 points or more) occurred in 26% of thrombectomy patients vs. 13% of controls (P = .0008).
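The improvement criterion just described is a simple either/or rule, which can be written as a one-line predicate (the function name is mine, for illustration):

```python
# "Major neurological improvement" as described above: a follow-up NIHSS
# of 0-2, OR a drop of at least 8 points from the baseline NIHSS.
def major_neurological_improvement(nihss_baseline: int, nihss_followup: int) -> bool:
    return nihss_followup <= 2 or (nihss_baseline - nihss_followup) >= 8

print(major_neurological_improvement(18, 9))   # 9-point drop -> True
print(major_neurological_improvement(18, 12))  # 6-point drop, NIHSS > 2 -> False
```

Note that either arm of the rule suffices: a patient presenting with a mild deficit can qualify by reaching NIHSS 0-2 without an 8-point drop.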

Quality of life, measured by the EuroQol 5-Dimension 5-Level survey, also showed a trend towards improvement in the thrombectomy group with mean scores of 53 vs. 46 (P = .058).  

In terms of safety, all-cause mortality was similar in the two groups (35% thrombectomy and 33% control), and symptomatic intracerebral hemorrhage (ICH) occurred in 3.97% of thrombectomy vs. 1.34% of control patients (relative risk, 2.96).

“Cost-effectiveness analysis and additional subgroup studies will provide more insight about the training needed to read the CT scan and whether there is any value in treating patients with an ASPECTS score of 2,” Dr. Zaidat concluded.

“Larger pooled analysis will also be very useful in understanding the threshold of brain volume with irreversible damage beyond which thrombectomy wouldn’t be helpful,” he added.
 

 

 

Meta-analysis of previous studies: MAGNA

Another presentation at the ESOC meeting reported an individual patient data meta-analysis (MAGNA) of the three previous trials suggesting benefit of thrombectomy in patients with large-core ischemic strokes of the anterior circulation up to 24 hours of last known well.

The RESCUE-Japan LIMIT trial was conducted in Japan; the SELECT 2 trial in North America, Europe, Australia, and New Zealand; and the ANGEL ASPECT trial in China.

In total, the meta-analysis included 1,009 patients, half of whom received thrombectomy and half received medical management only.

Results showed that in the whole population in the three trials, the use of thrombectomy improved functional outcomes, with an adjusted odds ratio of 1.78 (P < .001).

Functional independence (mRS 0-2) was also increased (23% vs. 9%; adjusted risk ratio, 2.62; P < .001), as was independent ambulation (mRS 0-3; 41% vs. 24%; aRR, 1.76; P < .001).
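As a sanity check on the pooled numbers, the crude (unadjusted) risk ratio can be recomputed from the reported proportions. The patient counts below are approximations back-calculated from the roughly even split of the 1,009-patient pool; the crude value will not exactly match the covariate-adjusted 2.62.

```python
# Crude risk ratio and odds ratio from 2x2 counts. The counts are
# APPROXIMATE, back-calculated from the reported percentages; the
# published aRR of 2.62 is covariate-adjusted, so the crude value differs.
def risk_ratio(events_a, n_a, events_b, n_b):
    return (events_a / n_a) / (events_b / n_b)

def odds_ratio(events_a, n_a, events_b, n_b):
    return (events_a / (n_a - events_a)) / (events_b / (n_b - events_b))

# mRS 0-2: ~23% of ~505 thrombectomy vs. ~9% of ~504 medical-management patients
rr = risk_ratio(116, 505, 45, 504)
print(round(rr, 2))  # crude RR, close to but not equal to the adjusted 2.62
```

The small gap between the crude and adjusted ratios is expected: adjustment rebalances baseline covariates (age, NIHSS, core volume) across arms before estimating the treatment effect.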

But early neurological worsening was more frequent with thrombectomy (aRR, 1.42; 95% CI, 1.09-1.84; P = .010).

No difference in mortality was identified between thrombectomy (27%) and medical management (28%) or in rates of symptomatic ICH (1.8% thrombectomy vs. 1.6% medical management). 

“The results from the previously published large-core trials and from this pooled dataset provide unequivocal evidence on the efficacy and safety of endovascular thrombectomy in patients with large-core infarcts,” lead author of the MAGNA meta-analysis, Amrou Sarraj, MD, professor of neurology at University Hospitals Cleveland Medical Center, affiliate of Case Western Reserve University in Cleveland, concluded.

“The benefit persists across the spectrum of age, clinical severity, and time, with clear benefit up to an estimated ischemic core volume of 150 mL,” he added. “We have great hopes that these results will lead to more patients being treated and achieving improved functional outcomes.”

On how the TESLA results fit in with the previous three trials, Dr. Sarraj pointed out to this news organization that the TESLA trial was conducted in the United States and enrolled patients based on ASPECTS 2-5 on noncontrast CT.

“The primary outcome for intention-to-treat analysis did not reach the prespecified threshold for efficacy, but the results were largely in the same direction as shown in SELECT2, ANGEL ASPECT, and RESCUE Japan Limit,” he said. “These findings further emphasize the efficacy and safety of thrombectomy in patients with large ischemic core, at the same time reinforcing the need to provide results from pooled data from all large-core trials.”

He noted that results from two further trials of thrombectomy in large-core strokes, TENSION and LASTE – both of which were stopped early because of the positive findings from the previous studies – are expected soon, and the MAGNA meta-analysis will be updated to include data from all six trials.

“This will increase the accuracy of the estimation of the treatment effect and will give even more power to look further into the details related to subgroups and selection imaging modalities,” Dr. Sarraj added.

The research team hopes that this joint effort will eventually set the pathway for selection algorithms and treatment boundaries in patients with large-vessel occlusion.

TESLA was an investigator-initiated study funded by unrestricted grants from Cerenovus, Penumbra, Medtronic, Stryker, and Genentech. Dr. Zaidat is a consultant for Stryker, Cerenovus, Penumbra, and Medtronic.

A version of this article first appeared on Medscape.com.


And a new meta-analysis (MAGNA) of previous studies in a similar population has provided more detailed estimates of the treatment benefit of thrombectomy in these patients. 

The TESLA trial, which included patients with large-core infarcts (ASPECTS score 2-5) within 24 hours of symptom onset, showed encouraging trends towards a benefit with thrombectomy for the primary outcome of 90-day utility-weighted scores on the modified Rankin scale (mRS), but this did not reach the prespecified Bayesian superiority threshold.

Several secondary efficacy endpoints also showed suggestions of benefits with thrombectomy.

“The interventional group had higher mean or average utility-weighted mRS scores than the control group which means that their functional recovery at 90 days was trending for better outcome and less disability,” lead TESLA investigator, Osama Zaidat, MD, neuroscience & stroke director at Mercy St. Vincent Medical Center, Toledo, Ohio, said in an interview. “They also showed better neurological improvement and a higher chance of achieving a good outcome (mRS 0-3).”

These patients with large-core infarct strokes were not included in the initial trials of endovascular therapy in patients presenting in the late time window, up to 24 hours, as it was thought they would not benefit. However, three recent trials (RESCUE-Japan LIMIT, ANGEL ASPECT, and SELECT2) have shown that patients with large-core infarcts can still benefit from endovascular thrombectomy.

While these three previous trials used sophisticated imaging techniques (MRI or CT perfusion) to select patients and restricted inclusion to those with an ASPECTS score of 3-5, the TESLA study had a more pragmatic design, using noncontrast CT scan evaluation alone, without advanced imaging, to select patients, and extending the inclusion criteria to patients with an ASPECTS score of 2.

“Noncontrast CT scans are available at all stroke centers so this study is more practical, highly generalizable, and more applicable globally,” Dr. Zaidat commented.

“However, our results suggest that when using noncontrast CT only to select patients, the gain or treatment effect of thrombectomy seems to be smaller than when using sophisticated advanced imaging to make the decision to go for thrombectomy or not as in the other trials,” he added.

The TESLA trial results were presented at the recent European Stroke Organisation Conference, held in Munich.

The study included 300 stroke patients with anterior circulation large‐vessel occlusion (NIHSS of 6 or more) with a large‐core infarction (investigator read ASPECTS Score 2-5), selected on the basis of noncontrast CT scan, who were randomized to undergo intra-arterial thrombectomy or best medical management (control) up to 24 hours from last known well.

The trial had a Bayesian probabilities design, with a primary endpoint of the 90-day utility-weighted mRS (uw-mRS), a relatively new patient-centered outcome used in stroke trials, which includes a quality-of-life measurement. Utilities represent preferences for mRS health states and range from 0 (death) to 1 (perfect health), so in contrast to the traditional mRS scores, a higher uw-mRS score is better.
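For readers unfamiliar with this endpoint, the arithmetic behind a utility-weighted mRS can be sketched in a few lines. The utility values below are hypothetical placeholders for illustration, not the weights used in TESLA (whose reported group scores of 2.93 and 2.27 suggest utilities on roughly a 0-10 rather than a 0-1 scale):

```python
# Sketch of a utility-weighted mRS calculation (illustrative only).
# These utility values are hypothetical placeholders, not TESLA's
# actual weights; trials derive weights from patient-preference data.
UTILITY = {0: 1.0, 1: 0.91, 2: 0.76, 3: 0.65, 4: 0.33, 5: 0.0, 6: 0.0}

def mean_uw_mrs(mrs_scores):
    """Average utility across a group's 90-day mRS scores."""
    return sum(UTILITY[s] for s in mrs_scores) / len(mrs_scores)

# Hypothetical group of five patients with mRS scores 1 through 6.
print(round(mean_uw_mrs([1, 2, 3, 4, 6]), 3))  # ~0.53
```

The key point is that, unlike the raw mRS, a higher utility-weighted average indicates less disability and a better outcome.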

The 90-day uw-mRS scores were 2.93 in the thrombectomy group vs. 2.27 in the control group.

The Bayesian probability of thrombectomy superiority was 0.957, which Dr. Zaidat said was “similar” to a P value of .043, but this fell short of the prespecified superiority probability of > .975 required to declare efficacy.
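The rough correspondence between a posterior probability of superiority and a one-sided P value can be illustrated under a normal approximation with a flat prior, where the two quantities sum to 1. The effect estimate and standard error below are hypothetical values chosen to reproduce the reported 0.957, not TESLA's actual data:

```python
# Under a normal approximation with a flat prior, the posterior
# probability that treatment beats control is Phi(z), where
# z = estimate / standard_error, and the one-sided P value is
# 1 - Phi(z). Inputs here are hypothetical, chosen to land near
# the trial's reported 0.957 probability.
from statistics import NormalDist

def superiority_probability(estimate, se):
    return NormalDist().cdf(estimate / se)

prob = superiority_probability(0.66, 0.3835)  # hypothetical inputs
print(round(prob, 3), round(1 - prob, 3))     # ~0.957 and ~0.043
```

This is why a superiority probability of 0.957 against a threshold of > .975 behaves much like a one-sided P value of .043 against a threshold of < .025.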

A separate analysis of patients selected by core-lab read of the noncontrast CT scan showed a Bayesian probability of benefit with thrombectomy of 0.98, “similar” to a one-sided P value of .02.

In terms of secondary endpoints, there were also some encouraging trends, including a suggestion of benefit in the 90-day mRS ordinal shift (odds ratio 1.40; P = .06). 

The number of patients achieving functional independence (mRS 0-2) was 14% in the thrombectomy group vs. 9% in the control group (P = .09); and a good functional outcome (mRS 0-3) was achieved in 30% of thrombectomy patients vs. 20% of those in the control group (P = .03).

Major neurological improvement (NIHSS scale of 0-2 or improvement of 8 points or more) occurred in 26% of thrombectomy patients vs. 13% of controls (P = .0008).

Quality of life, measured by the EuroQol 5-Dimension 5-Level survey, also showed a trend towards improvement in the thrombectomy group with mean scores of 53 vs. 46 (P = .058).  

In terms of safety, all-cause mortality was similar in the two groups (35% thrombectomy and 33% control), and symptomatic intracerebral hemorrhage (ICH) occurred in 3.97% of thrombectomy patients vs. 1.34% of control patients (relative risk, 2.96).
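Assuming the relative risk was computed as the crude ratio of the two event rates, the article's safety figures are internally consistent:

```python
# A relative risk is the ratio of the event rate in the treated
# group to the event rate in the control group; here we check the
# symptomatic ICH figures reported for TESLA.
def relative_risk(rate_treated, rate_control):
    return rate_treated / rate_control

print(round(relative_risk(3.97, 1.34), 2))  # ~2.96
```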

“Cost-effective analysis and additional subgroup studies will provide more insight about the training needs to read the CT scan and if there is any value to treat patients with an ASPECTS score of 2,” Dr. Zaidat concluded.

“Larger pooled analysis will also be very useful in understanding the threshold of brain volume with irreversible damage beyond which thrombectomy wouldn’t be helpful,” he added.

Meta-analysis of previous studies: MAGNA

Another presentation at the ESOC meeting reported an individual patient data meta-analysis (MAGNA) of the three previous trials that suggested benefit of thrombectomy in patients with large-core ischemic strokes of the anterior circulation treated within 24 hours of last known well.

The RESCUE Japan Limit trial was conducted in Japan; the SELECT-2 trial in North America, Europe, Australia, and New Zealand; and the ANGEL ASPECT trial in China.

In total, the meta-analysis included 1,009 patients, half of whom received thrombectomy and half received medical management only.

Results showed that in the whole population in the three trials, the use of thrombectomy improved functional outcomes, with an adjusted odds ratio of 1.78 (P < .001).

Functional independence (mRS 0-2) was also increased (23% vs. 9%; adjusted risk ratio, 2.62; P < .001), as was independent ambulation (mRS 0-3; 41% vs. 24%; aRR, 1.76; P < .001).

However, early neurological worsening was more frequent with thrombectomy (aRR, 1.42; 95% CI, 1.09-1.84; P = .010).

No difference in mortality was identified between thrombectomy (27%) and medical management (28%) or in rates of symptomatic ICH (1.8% thrombectomy vs. 1.6% medical management). 

“The results from the previously published large-core trials and from this pooled dataset provide unequivocal evidence on the efficacy and safety of endovascular thrombectomy in patients with large-core infarcts,” lead author of the MAGNA meta-analysis, Amrou Sarraj, MD, professor of neurology at University Hospitals Cleveland Medical Center, affiliate of Case Western Reserve University in Cleveland, concluded.

“The benefit persists across the spectrum of age, clinical severity, and time, with clear benefit up to an estimated ischemic core volume of 150 mL,” he added. “We have great hopes that these results will lead to more patients being treated and achieving improved functional outcomes.”

On how the TESLA results fit in with the previous three trials, Dr. Sarraj pointed out to this news organization that the TESLA trial was conducted in the United States and enrolled patients based on ASPECTS 2-5 on noncontrast CT.

“The primary outcome for intention-to-treat analysis did not reach the prespecified threshold for efficacy, but the results were largely in the same direction as shown in SELECT2, ANGEL ASPECT, and RESCUE Japan Limit,” he said. “These findings further emphasize the efficacy and safety of thrombectomy in patients with large ischemic core, at the same time reinforcing the need to provide results from pooled data from all large-core trials.”

He noted that results from two further trials of thrombectomy in large-core strokes, TENSION and LASTE – both of which have now been stopped early because of the positive findings from the previous studies – are expected soon, and the MAGNA meta-analysis will be updated to include data from all six trials.

“This will increase the accuracy of the estimation of the treatment effect and will give even more power to look further into the details related to subgroups and selection imaging modalities,” Dr. Sarraj added.

The research team hopes that this joint effort will eventually set the pathway for selection algorithms and treatment boundaries in patients with large-vessel occlusion.

TESLA was an investigator-initiated study funded by unrestricted grants from Cerenovus, Penumbra, Medtronic, Stryker, and Genentech. Dr. Zaidat is a consultant for Stryker, Cerenovus, Penumbra, and Medtronic.

A version of this article first appeared on Medscape.com.


FROM ESOC 2023


Guide explains nonsurgical management of major hemorrhage


A new guide offers recommendations for the nonsurgical management of major hemorrhage, which is a challenging clinical problem.

Major hemorrhage is a significant cause of death and can occur in a myriad of clinical settings.

“In Ontario, we’ve been collecting quality metrics on major hemorrhages to try and make sure that a higher percentage of patients gets the best possible care when they are experiencing significant bleeding,” author Jeannie Callum, MD, professor and director of transfusion medicine at Kingston (Ont.) Health Sciences Centre and Queen’s University, also in Kingston, said in an interview. “There were some gaps, so this is our effort to get open, clear information out to the emergency doctors, intensive care unit doctors, the surgeons, and everyone else involved in managing major hemorrhage, to help close these gaps.”

The guide was published in the Canadian Medical Association Journal.

Fast care essential

The guide aims to provide answers, based on the latest research, to questions such as when to activate a massive hemorrhage protocol (MHP), which patients should receive tranexamic acid (TXA), which blood products should be transfused before laboratory results are available, how to monitor the effects of blood transfusion, and when fibrinogen concentrate or prothrombin complex concentrate should be given.

Not all recommendations will be followed, Dr. Callum said, especially in rural hospitals with limited resources. But the guide is adaptable, and rural hospitals can create protocols that are customized to their unique circumstances.

Care must be “perfect and fast” in the first hour of major injury, said Dr. Callum. “You need to get a proclotting drug in that first hour if you have a traumatic or postpartum bleed. You have to make sure your clotting factors never fail you throughout your resuscitation. You have to be fast with the transfusion. You have to monitor for the complications of the transfusion, electrolyte disturbances, and the patient’s temperature dropping. It’s a complicated situation that needs a multidisciplinary team.”

Bleeding affects everybody in medicine, from family doctors in smaller institutions who work in emergency departments to obstetricians and surgeons, she added.

“For people under the age of 45, trauma is the most common cause of death. When people die of trauma, they die of bleeding. So many people experience these extreme bleeds. We believe that some of them might be preventable with faster, more standardized, more aggressive care. That’s why we wrote this review,” said Dr. Callum.
 

Administer TXA quickly  

The first recommendation is to ensure that every hospital has a massive hemorrhage protocol. Such a protocol is vital for the emergency department, operating room, and obstetric unit. “Making sure you’ve got a protocol that is updated every 3 years and adjusted to the local hospital context is essential,” said Dr. Callum.

Smaller hospitals will have to adjust their protocols according to the capabilities of their sites. “Some smaller hospitals do not have platelets in stock and get their platelets from another hospital, so you need to adjust your protocol to what you are able to do. Not every hospital can control bleeding in a trauma patient, so your protocol would be to stabilize and call a helicopter. Make sure all of this is detailed so that implementing it becomes automatic,” said Dr. Callum.

An MHP should be activated for patients with uncontrolled hemorrhage who meet the clinical criteria of the local hospital and are expected to need blood product support and red blood cells.

“Lots of people bleed, but not everybody is bleeding enough that they need a code transfusion,” said Dr. Callum. Most patients with gastrointestinal bleeds caused by NSAID use can be managed with uncrossmatched blood from the local blood bank. “But in patients who need the full code transfusion because they are going to need plasma, clotting factor replacement, and many other drugs, that is when the MHP should be activated. Don’t activate it when you don’t need it, because doing so activates the whole hospital and diverts care away from other patients.”

TXA should be administered as soon as possible after onset of hemorrhage in most patients, with the exception of gastrointestinal hemorrhage, where a benefit has not been shown.

TXA has been a major advance in treating massive bleeding, Dr. Callum said. “TXA was invented by a Japanese husband-and-wife research team. We know that it reduces the death rate in trauma and in postpartum hemorrhage, and it reduces the chance of major bleeding with major surgical procedures. We give it routinely in surgical procedures. If a patient gets TXA within 60 minutes of injury, it dramatically reduces the death rate. And it costs $10 per patient. It’s cheap, it’s easy, it has no side effects. It’s just amazing.”

Future research must address several unanswered questions, said Dr. Callum. These questions include whether prehospital transfusion improves patient outcomes, whether whole blood has a role in the early management of major hemorrhage, and what role factor concentrates play in patients with major bleeding.

‘Optimal recommendations’

Commenting on the document, Bourke Tillmann, MD, PhD, trauma team leader at Sunnybrook Health Sciences Centre and the Ross Tilley Burn Center in Toronto, said: “Overall, I think it is a good overview of MHPs as an approach to major hemorrhage.”

The review also is timely, since Ontario released its MHP guidelines in 2021, he added. “I would have liked to see more about the treatment aspects than just an overview of an MHP. But if you are the person overseeing the emergency department or running the blood bank, these protocols are incredibly useful and incredibly important.”

“This report is a nice and thoughtful overview of best practices in many areas, especially trauma, and makes recommendations that are optimal, although they are not necessarily practical in all centers,” Eric L. Legome, MD, professor and chair of emergency medicine at Mount Sinai West and Mount Sinai Morningside, New York, said in an interview.

“If you’re in a small rural hospital with one lab technician, trying to do all of these things, it will not be possible. These are optimal recommendations that people can use to the best of their ability, but they are not standard of care, because some places will not be able to provide this level of care,” he added. “This paper provides practical, reasonable advice that should be looked at as you are trying to implement transfusion policies and processes, with the understanding that it is not necessarily applicable or practical for very small hospitals in very rural centers that might not have access to these types of products and tools, but it’s a reasonable and nicely written paper.”

No outside funding for the guideline was reported. Dr. Callum has received research funding from Canadian Blood Services and Octapharma. She sits on the nominating committee with the Association for the Advancement of Blood & Biotherapies and on the data safety monitoring boards for the Tranexamic Acid for Subdural Hematoma trial and the Fibrinogen Replacement in Trauma trial. Dr. Tillmann and Dr. Legome reported no relevant financial relationships.

A version of this article originally appeared on Medscape.com.



FROM THE CANADIAN MEDICAL ASSOCIATION JOURNAL

Link between bipolar disorder and CVD mortality explained?


An early predictor of cardiovascular disease (CVD) has been found in youth with bipolar disorder (BD), in new findings that may explain the “excessive and premature mortality” related to heart disease in this patient population.

The investigators found that reactive hyperemia index (RHI) scores, a measure of endothelial function, were tied to mood severity: RHI was higher in patients with higher mania scores but not in those with higher depression scores. These findings persisted even after accounting for medications, obesity, and other cardiovascular risk factors (CVRFs).

“From a clinical perspective, these findings highlight the potential value of integrating vascular health in the assessment and management of youth with BD, and from a scientific perspective, these findings call for additional research focused on shared biological mechanisms linking vascular health and mood symptoms of BD,” senior investigator Benjamin Goldstein, MD, PhD, full professor of psychiatry, pharmacology, and psychological clinical science, University of Toronto, said in an interview.

The study was published online in the Journal of Clinical Psychiatry.

‘Excessively present’

BD is associated with “excessive and premature cardiovascular mortality” and CVD is “excessively present” in BD, exceeding what can be explained by traditional cardiovascular risk factors, psychiatric medications, and substance use, the researchers noted.

“In adults, more severe mood symptoms increase the risk of future CVD. Our focus on endothelial function rose due to the fact that CVD is rare in youth, whereas endothelial dysfunction – considered a precursor of CVD – can be assessed in youth,” said Dr. Goldstein, who holds the RBC Investments Chair in children’s mental health and developmental psychopathology at the Centre for Addiction and Mental Health, Toronto, where he is director of the Centre for Youth Bipolar Disorder.

For this reason, he and his colleagues were “interested in researching whether endothelial dysfunction is associated with mood symptoms in youth with BD.” Ultimately, the motivation was to “inspire new therapeutic opportunities that may improve both cardiovascular and mental health simultaneously.”

To investigate the question, the researchers studied 209 youth aged 13-20 years (n = 114 with BD and 94 healthy controls [HCs]).

The BD group comprised 34 euthymic, 36 depressed, and 44 hypomanic/mixed participants; across the depressed and hypomanic/mixed groups, 72 were experiencing clinically significant depression.

Participants had to be free of chronic inflammatory illness, recent infectious diseases, and neurologic conditions, and could not be taking medications that target traditional CVRFs.

Participants’ bipolar symptoms, psychosocial functioning, and family history were assessed. In addition, they were asked about treatment, physical and/or sexual abuse, smoking status, and socioeconomic status. Height, weight, waist circumference, blood pressure, and blood tests to assess CVRFs, including C-reactive protein (CRP), were also assessed. RHI was measured via pulse amplitude tonometry, with lower values indicating poorer endothelial function.

Positive affect beneficial?

Compared with HCs, there were fewer White participants in the BD group (78% vs. 55%; P < .001). The BD group also had higher Tanner stage development scores (stage 5: 65% vs. 35%; P = .03; V = 0.21), higher body mass index (BMI, 24.4 ± 4.6 vs. 22.0 ± 4.2; P < .001; d = 0.53), and higher CRP (1.94 ± 3.99 vs. 0.76 ± 0.86; P = .009; d = –0.40).

After controlling for age, sex, and BMI (F3,202 = 4.47; P = .005; ηp2 = 0.06), the researchers found significant between-group differences in RHI.

Post hoc pairwise comparisons showed RHI to be significantly lower in the BD-depressed versus the HC group (P = .04; d = 0.4). Moreover, the BD-hypomanic/mixed group had significantly higher RHI, compared with the other BD groups and the HC group.

RHI was associated with higher mania scores (beta, 0.26; P = .006), but there was no significant association with depression scores (beta, 0.01; P = .90).

The mood state differences in RHI and the RHI-mania association remained significant in sensitivity analyses examining the effects on RHI of current medication use and of CVRFs, including lipids, CRP, and blood pressure.

“We found that youth with BD experiencing a depressive episode had lower endothelial function, whereas youth with BD experiencing a hypomanic/mixed episode had higher endothelial function, as compared to healthy youth,” Dr. Goldstein said.

There are several mechanisms potentially underlying the association between endothelial function and hypomania, the investigators noted. For example, positive affect is associated with increased endothelial function in normative samples, so hypomanic symptoms, including elation, may have similar beneficial associations, although those benefits likely do not extend to mania, which has been associated with cardiovascular risk.

They also point to several limitations in the study. The cross-sectional design “precludes making inferences regarding the temporal relationship between RHI and mood.” Moreover, the study focused only on hypomania, so “we cannot draw conclusions about mania.” In addition, the HC group had a “significantly higher proportion” of White participants, and a lower Tanner stage, so it “may not be a representative control sample.”

Nevertheless, the researchers concluded that the study “adds to the existing evidence for the potential value of integrating cardiovascular-related therapeutic approaches in BD,” noting that further research is needed to elucidate the mechanisms of the association.

Observable changes in youth

In a comment, Jess G Fiedorowicz, MD, PhD, head and chief, department of mental health, Ottawa Hospital Research Institute, noted that individuals with BD “have a much higher risk of CVD, which tends to develop earlier and shortens life expectancy by more than a decade.” 

This cardiovascular risk “appears to be acquired over the long-term course of illness and proportionate to the persistence and severity of mood symptoms, which implies that mood syndromes, such as depression and mania, themselves may induce changes in the body relevant to CVD,” said Dr. Fiedorowicz, who is also a professor in the department of psychiatry and senior research chair in adult psychiatry at the Brain and Mind Research Institute, University of Ottawa, and was not involved with the study.

The study “adds to a growing body of evidence that mood syndromes may enact physiological changes that may be relevant to risk of CVD. One important aspect of this study is that this can even be observed in [a] young sample,” he said.

This study was funded by the Canadian Institutes of Health Research and a Miner’s Lamp Innovation Fund from the University of Toronto. Dr. Goldstein and coauthors declare no relevant financial relationships. Dr. Fiedorowicz receives an honorarium from Elsevier for his work as editor-in-chief of the Journal of Psychosomatic Research.

A version of this article first appeared on Medscape.com.




FROM THE JOURNAL OF CLINICAL PSYCHIATRY


Survival similar with hearts donated after circulatory or brain death


Heart transplantation using the new strategy of donation after circulatory death (DCD) resulted in similar 6-month survival among recipients as the traditional method of using hearts donated after brain death (DBD) in the first randomized trial comparing the two approaches.

“This randomized trial showing recipient survival with DCD to be similar to DBD should lead to DCD becoming the standard of care alongside DBD,” lead author Jacob Schroder, MD, surgical director, heart transplantation program, Duke University Medical Center, Durham, N.C., said in an interview.

“This should enable many more heart transplants to take place and for us to be able to cast the net further and wider for donors,” he said.

The trial was published online in the New England Journal of Medicine.

Dr. Schroder estimated that only around one-fifth of the 120 U.S. heart transplant centers currently carry out DCD transplants, but he is hopeful that the publication of this study will encourage more transplant centers to do these DCD procedures.

“The problem is there are many low-volume heart transplant centers, which may not be keen to do DCD transplants as they are a bit more complicated and expensive than DBD heart transplants,” he said. “But we need to look at the big picture of how many lives can be saved by increasing the number of heart transplant procedures and the money saved by getting more patients off the waiting list.”

The authors explain that heart transplantation has traditionally been limited to hearts obtained from donors after brain death, an approach that allows in situ assessment of cardiac function and of the donor allograft’s suitability for transplantation before surgical procurement.

But because the need for heart transplants far exceeds the availability of suitable donors, the use of DCD hearts has been investigated, and this approach is now being pursued in many countries. In the DCD approach, the heart has stopped beating in the donor, and perfusion techniques are used to restart the organ.

There are two different approaches to restarting the heart in DCD. The first approach involves the heart being removed from the donor and reanimated, preserved, assessed, and transported with the use of a portable extracorporeal perfusion and preservation system (Organ Care System, TransMedics). The second involves restarting the heart in the donor’s body for evaluation before removal and transportation under the traditional cold storage method used for donations after brain death.

The current trial was designed to compare clinical outcomes in patients who had received a heart from a circulatory death donor using the portable extracorporeal perfusion method for DCD transplantation, with outcomes from the traditional method of heart transplantation using organs donated after brain death.

For the randomized, noninferiority trial, adult candidates for heart transplantation were assigned either to receive a heart from a circulatory-death donor or from a brain-death donor, whichever became available first (circulatory-death group), or to receive only a heart preserved with traditional cold storage after the brain death of the donor (brain-death group).

The primary end point was the risk-adjusted survival at 6 months in the as-treated circulatory-death group, as compared with the brain-death group. The primary safety end point was serious adverse events associated with the heart graft at 30 days after transplantation.

A total of 180 patients underwent transplantation, 90 of whom received a heart donated after circulatory death and 90 who received a heart donated after brain death. A total of 166 transplant recipients were included in the as-treated primary analysis (80 who received a heart from a circulatory-death donor and 86 who received a heart from a brain-death donor).

The risk-adjusted 6-month survival in the as-treated population was 94% among recipients of a heart from a circulatory-death donor, as compared with 90% among recipients of a heart from a brain-death donor (P < .001 for noninferiority).

There were no substantial between-group differences in the mean per-patient number of serious adverse events associated with the heart graft at 30 days after transplantation.

Of 101 hearts from circulatory-death donors that were preserved with the use of the perfusion system, 90 were successfully transplanted according to the criteria for lactate trend and overall contractility of the donor heart, an overall utilization rate of 89%.

More patients who received a heart from a circulatory-death donor had moderate or severe primary graft dysfunction (22%) than those who received a heart from a brain-death donor (10%). However, graft failure that resulted in retransplantation occurred in two (2.3%) patients who received a heart from a brain-death donor versus zero patients who received a heart from a circulatory-death donor.

The researchers note that the higher incidence of primary graft dysfunction in the circulatory-death group is expected, given the period of warm ischemia that occurs in this approach. But they point out that this did not affect patient or graft survival at 30 days or 1 year.

“Primary graft dysfunction is when the heart doesn’t fully work immediately after transplant and some mechanical support is needed,” Dr. Schroder commented to this news organization. “This occurred more often in the DCD group, but this mechanical support is only temporary, and generally only needed for a day or two.

“It looks like it might take the heart a little longer to start fully functioning after DCD, but our results show this doesn’t seem to affect recipient survival.”

He added: “We’ve started to become more comfortable with DCD. Sometimes it may take a little longer to get the heart working properly on its own, but the rate of mechanical support is now much lower than when we first started doing these procedures. And cardiac MRI on the recipient patients before discharge have shown that the DCD hearts are not more damaged than those from DBD donors.”

The authors also report that six donor hearts in the DCD group had protocol deviations of functional warm ischemic time greater than 30 minutes or continuously rising lactate levels; none of these hearts showed primary graft dysfunction.

On this observation, Dr. Schroder said: “I think we need to do more work on understanding the ischemic time limits. The current 30 minutes time limit was estimated in animal studies. We need to look more closely at data from actual DCD transplants. While 30 minutes may be too long for a heart from an older donor, the heart from a younger donor may be fine for a longer period of ischemic time as it will be healthier.”


“Exciting” results

In an editorial, Nancy K. Sweitzer, MD, PhD, vice chair of clinical research, department of medicine, and director of clinical research, division of cardiology, Washington University in St. Louis, describes the results of the current study as “exciting,” adding that, “They clearly show the feasibility and safety of transplantation of hearts from circulatory-death donors.”

However, Dr. Sweitzer points out that the sickest patients in the study – those who were United Network for Organ Sharing (UNOS) status 1 and 2 – were more likely to receive a DBD heart and the more stable patients (UNOS 3-6) were more likely to receive a DCD heart.

“This imbalance undoubtedly contributed to the success of the trial in meeting its noninferiority end point. Whether transplantation of hearts from circulatory-death donors is truly safe in our sickest patients with heart failure is not clear,” she says.

However, she concludes, “Although caution and continuous evaluation of data are warranted, the increased use of hearts from circulatory-death donors appears to be safe in the hands of experienced transplantation teams and will launch an exciting phase of learning and improvement.”

“A safely expanded pool of heart donors has the potential to increase fairness and equity in heart transplantation, allowing more persons with heart failure to have access to this lifesaving therapy,” she adds. “Organ donors and transplantation teams will save increasing numbers of lives with this most precious gift.”

The current study was supported by TransMedics. Dr. Schroder reports no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Heart transplantation using the new strategy of donation after circulatory death (DCD) resulted in similar 6-month survival among recipients as the traditional method of using hearts donated after brain death (DBD) in the first randomized trial comparing the two approaches.

“This randomized trial showing recipient survival with DCD to be similar to DBD should lead to DCD becoming the standard of care alongside DBD,” lead author Jacob Schroder, MD, surgical director, heart transplantation program, Duke University Medical Center, Durham, N.C., said in an interview.

“This should enable many more heart transplants to take place and for us to be able to cast the net further and wider for donors,” he said.

The trial was published online in the New England Journal of Medicine.

Dr. Schroder estimated that only around one-fifth of the 120 U.S. heart transplant centers currently carry out DCD transplants, but he is hopeful that the publication of this study will encourage more transplant centers to do these DCD procedures.

“The problem is there are many low-volume heart transplant centers, which may not be keen to do DCD transplants as they are a bit more complicated and expensive than DBD heart transplants,” he said. “But we need to look at the big picture of how many lives can be saved by increasing the number of heart transplant procedures and the money saved by getting more patients off the waiting list.”

The authors explain that heart transplantation has traditionally been limited to the use of hearts obtained from donors after brain death, which allows in situ assessment of cardiac function and of the suitability for transplantation of the donor allograft before surgical procurement.

But because the need for heart transplants far exceeds the availability of suitable donors, the use of DCD hearts has been investigated and this approach is now being pursued in many countries. In the DCD approach, the heart will have stopped beating in the donor, and perfusion techniques are used to restart the organ.

There are two different approaches to restarting the heart in DCD. The first approach involves the heart being removed from the donor and reanimated, preserved, assessed, and transported with the use of a portable extracorporeal perfusion and preservation system (Organ Care System, TransMedics). The second involves restarting the heart in the donor’s body for evaluation before removal and transportation under the traditional cold storage method used for donations after brain death.

The current trial was designed to compare clinical outcomes in patients who had received a heart from a circulatory death donor using the portable extracorporeal perfusion method for DCD transplantation, with outcomes from the traditional method of heart transplantation using organs donated after brain death.

In the randomized, noninferiority trial, adult candidates for heart transplantation were assigned either to receive a heart from a circulatory-death donor or from a brain-death donor, whichever became available first (the circulatory-death group), or to receive only a heart that had been preserved with traditional cold storage after the donor’s brain death (the brain-death group).

The primary end point was the risk-adjusted survival at 6 months in the as-treated circulatory-death group, as compared with the brain-death group. The primary safety end point was serious adverse events associated with the heart graft at 30 days after transplantation.

A total of 180 patients underwent transplantation, 90 of whom received a heart donated after circulatory death and 90 who received a heart donated after brain death. A total of 166 transplant recipients were included in the as-treated primary analysis (80 who received a heart from a circulatory-death donor and 86 who received a heart from a brain-death donor).

The risk-adjusted 6-month survival in the as-treated population was 94% among recipients of a heart from a circulatory-death donor, as compared with 90% among recipients of a heart from a brain-death donor (P < .001 for noninferiority).
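For readers who want the arithmetic behind a noninferiority comparison of two survival proportions, here is a minimal, hypothetical sketch. The 20-percentage-point margin and the simple unadjusted two-proportion test below are assumptions for illustration only; the trial’s actual analysis was risk-adjusted, and its prespecified margin is not stated in this article.

```python
# Illustrative, unadjusted noninferiority check on the reported survival
# figures: 94% (n = 80, circulatory-death) vs. 90% (n = 86, brain-death).
# The margin below is a placeholder, not the trial's prespecified margin.
from math import sqrt

def noninferiority_z(p_new: float, n_new: int,
                     p_ref: float, n_ref: int, margin: float) -> float:
    """One-sided z statistic for H0: the new approach is worse than the
    reference by at least `margin` (absolute difference in proportions)."""
    diff = p_new - p_ref
    se = sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)
    return (diff + margin) / se

z = noninferiority_z(0.94, 80, 0.90, 86, margin=0.20)
print(round(z, 2))  # well above the 1.645 one-sided 5% critical value
```

The point of the sketch is simply that when the observed difference favors the new approach, even a generous margin yields a large z statistic, which is why such results can reach P < .001 for noninferiority.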

There were no substantial between-group differences in the mean per-patient number of serious adverse events associated with the heart graft at 30 days after transplantation.

Of the 101 hearts from circulatory-death donors that were preserved with the use of the perfusion system, 90 were successfully transplanted according to the criteria for lactate trend and overall contractility of the donor heart, an overall utilization rate of 89%.

More patients who received a heart from a circulatory-death donor had moderate or severe primary graft dysfunction (22%) than those who received a heart from a brain-death donor (10%). However, graft failure that resulted in retransplantation occurred in two (2.3%) patients who received a heart from a brain-death donor versus zero patients who received a heart from a circulatory-death donor.

The researchers note that the higher incidence of primary graft dysfunction in the circulatory-death group is expected, given the period of warm ischemia that occurs in this approach. But they point out that this did not affect patient or graft survival at 30 days or 1 year.

“Primary graft dysfunction is when the heart doesn’t fully work immediately after transplant and some mechanical support is needed,” Dr. Schroder commented to this news organization. “This occurred more often in the DCD group, but this mechanical support is only temporary, and generally only needed for a day or two.

“It looks like it might take the heart a little longer to start fully functioning after DCD, but our results show this doesn’t seem to affect recipient survival.”

He added: “We’ve started to become more comfortable with DCD. Sometimes it may take a little longer to get the heart working properly on its own, but the rate of mechanical support is now much lower than when we first started doing these procedures. And cardiac MRI on the recipient patients before discharge has shown that the DCD hearts are not more damaged than those from DBD donors.”

The authors also report that six donor hearts in the DCD group had protocol deviations (a functional warm ischemic time greater than 30 minutes or continuously rising lactate levels), and that these hearts did not show primary graft dysfunction.

On this observation, Dr. Schroder said: “I think we need to do more work on understanding the ischemic time limits. The current 30-minute limit was estimated from animal studies. We need to look more closely at data from actual DCD transplants. While 30 minutes may be too long for a heart from an older donor, the heart from a younger donor may tolerate a longer period of ischemic time, as it will be healthier.”

“Exciting” results

In an editorial, Nancy K. Sweitzer, MD, PhD, vice chair of clinical research, department of medicine, and director of clinical research, division of cardiology, Washington University in St. Louis, describes the results of the current study as “exciting,” adding that they “clearly show the feasibility and safety of transplantation of hearts from circulatory-death donors.”

However, Dr. Sweitzer points out that the sickest patients in the study – those who were United Network for Organ Sharing (UNOS) status 1 and 2 – were more likely to receive a DBD heart and the more stable patients (UNOS 3-6) were more likely to receive a DCD heart.

“This imbalance undoubtedly contributed to the success of the trial in meeting its noninferiority end point. Whether transplantation of hearts from circulatory-death donors is truly safe in our sickest patients with heart failure is not clear,” she says.

However, she concludes, “Although caution and continuous evaluation of data are warranted, the increased use of hearts from circulatory-death donors appears to be safe in the hands of experienced transplantation teams and will launch an exciting phase of learning and improvement.”

“A safely expanded pool of heart donors has the potential to increase fairness and equity in heart transplantation, allowing more persons with heart failure to have access to this lifesaving therapy,” she adds. “Organ donors and transplantation teams will save increasing numbers of lives with this most precious gift.”

The current study was supported by TransMedics. Dr. Schroder reports no relevant financial relationships.

A version of this article first appeared on Medscape.com.


FROM THE NEW ENGLAND JOURNAL OF MEDICINE


When could you be sued for AI malpractice? You’re likely using it now


The ways in which artificial intelligence (AI) may transform the future of medicine are making headlines across the globe. But chances are, you’re already using AI in your practice every day – you may just not realize it.

And whether you recognize the presence of AI or not, the technology could be putting you in danger of a lawsuit, legal experts say.

The use of AI in daily practice can come with hidden liabilities, and as hospitals and medical groups deploy AI into more areas of health care, new liability exposures may be on the horizon.

“For physicians, AI has also not yet drastically changed or improved the way care is provided or consumed,” said Michael LeTang, chief nursing informatics officer and vice president of risk management and patient safety at Healthcare Risk Advisors, part of TDC Group. “Consequently, it may seem like AI is not present in their work streams, but in reality, it has been utilized in health care for several years. As AI technologies continue to develop and become more sophisticated, we can expect them to play an increasingly significant role in health care.”

Today, most AI applications in health care use narrow AI, which is designed to complete a single task without human assistance, as opposed to artificial general intelligence (AGI), which pertains to human-level reasoning and problem solving across a broad spectrum. Here are some ways doctors are using AI throughout the day – sometimes being aware of its assistance, and sometimes being unaware:

  • Many doctors use electronic health records (EHRs) with integrated AI that include computerized clinical decision support tools designed to reduce the risk of diagnostic error and to integrate decision-making in the medication ordering function.
  • Cardiologists, pathologists, and dermatologists use AI in the interpretation of vast amounts of images, tracings, and complex patterns.
  • Surgeons are using AI-enhanced surgical robotics for orthopedic surgeries, such as joint replacement and spine surgery.
  • A growing number of doctors are using ChatGPT to assist in drafting prior authorization letters for insurers. Experts say more doctors are also experimenting with ChatGPT to support medical decision-making.
  • Within oncology, physicians use machine learning techniques in the form of computer-aided detection systems for early breast cancer detection.
  • AI algorithms are often used by health systems for workflow, staffing optimization, population management, and care coordination.
  • Some systems within EHRs use AI to indicate high-risk patients.
  • Physicians are using AI applications for the early recognition of sepsis, including EHR-integrated decision tools, such as the Hospital Corporation of America Healthcare’s Sepsis Prediction and Optimization Therapy and the Sepsis Early Risk Assessment algorithm.
  • About 30% of radiologists use AI in their practice to analyze x-rays and CT scans.
  • Epic Systems recently announced a partnership with Microsoft to integrate ChatGPT into MyChart, Epic’s patient portal system. Pilot hospitals will utilize ChatGPT to automatically generate responses to patient-generated questions sent via the portal.

The growth of AI in health care has been enormous, and it’s only going to continue, said Ravi B. Parikh, MD, an assistant professor in the department of medical ethics and health policy and medicine at the University of Pennsylvania, Philadelphia.

“What’s really critical is that physicians, clinicians, and nurses using AI are provided with the tools to understand how artificial intelligence works and, most importantly, understand that they are still accountable for making the ultimate decision,” Mr. LeTang said. “The information is not always going to be the right thing to do or the most accurate thing to do. They’re still liable for making a bad decision, even if AI is driving that.”

What are the top AI legal dangers of today?

A pressing legal risk is becoming too reliant on the suggestions that AI-based systems provide, which can lead to poor care decisions, said Kenneth Rashbaum, a New York–based cybersecurity attorney with more than 25 years of experience in medical malpractice defense.

This can occur, for example, when using clinical support systems that leverage AI, machine learning, or statistical pattern recognition. Today, clinical support systems are commonly administered through EHRs and other computerized clinical workflows. In general, such systems match a patient’s characteristics to a computerized clinical knowledge base. An assessment or recommendation is then presented to the physician for a decision.

“If the clinician blindly accepts it without considering whether it’s appropriate for this patient at this time with this presentation, the clinician may bear some responsibility if there is an untoward result,” Mr. Rashbaum said.

“A common claim, even in the days before the EMR [electronic medical record] and AI, was that the clinician did not take all available information into account in rendering treatment, including history of past and present condition, as reflected in the records, communication with past and other present treating clinicians, lab and radiology results, discussions with the patient, and physical examination findings,” he said. “So, if the clinician relied upon the support prompt to the exclusion of these other sources of information, that could be a very strong argument for the plaintiff.”

Chatbots, such as OpenAI’s ChatGPT, are another form of AI raising legal red flags. ChatGPT, trained on a massive set of text data, can carry out conversations, write code, draft emails, and answer any question posed. The chatbot has gained considerable credibility for accurately diagnosing rare conditions in seconds, and it recently passed the U.S. Medical Licensing Examination.

It’s unclear how many doctors are signing onto the ChatGPT website daily, but physicians are actively using the chatbot, particularly for assistance with prior authorization letters and to support decision-making processes in their practices, said Mr. LeTang.

When physicians ask ChatGPT a question, however, they should be mindful that ChatGPT could “hallucinate,” a term that refers to a generated response that sounds plausible but is factually incorrect or is unrelated to the context, explains Harvey Castro, MD, an emergency physician, ChatGPT health care expert, and author of the 2023 book “ChatGPT and Healthcare: Unlocking the Potential of Patient Empowerment.”

Acting on ChatGPT’s response without vetting the information places doctors at serious risk of a lawsuit, he said.

“Sometimes, the response is half true and half false,” he said. “Say, I go outside my specialty of emergency medicine and ask it about a pediatric surgical procedure. It could give me a response that sounds medically correct, but then I ask a pediatric cardiologist, and he says, ‘We don’t even do this. This doesn’t even exist!’ Physicians really have to make sure they are vetting the information provided.”

In response to ChatGPT’s growing usage by health care professionals, hospitals and practices are quickly implementing guidelines, policies, and restrictions that caution physicians about the accuracy of ChatGPT-generated information, adds Mr. LeTang.

Emerging best practices include avoiding the input of patient health information, personally identifiable information, or any data that could be commercially valuable or considered the intellectual property of a hospital or health system, he said.

“Another crucial guideline is not to rely solely on ChatGPT as a definitive source for clinical decision-making; physicians must exercise their professional judgment,” he said. “If best practices are not adhered to, the associated risks are present today. However, these risks may become more significant as AI technologies continue to evolve and become increasingly integrated into health care.”

The potential for misdiagnosis by AI systems and the risk of unnecessary procedures if physicians do not thoroughly evaluate and validate AI predictions are other dangers.

As an example, Mr. LeTang described a case in which a physician documents in the EHR that a patient has presented to the emergency department with chest pains and other signs of a heart attack, and an AI algorithm predicts that the patient is experiencing an active myocardial infarction. If the physician then sends the patient for stenting or an angioplasty without other concrete evidence or tests to confirm the diagnosis, the doctor could later face a misdiagnosis complaint if the costly procedures were unnecessary.

“That’s one of the risks of using artificial intelligence,” he said. “A large percentage of malpractice claims is failure to diagnose, delayed diagnosis, or inaccurate diagnosis. What falls in the category of failure to diagnose is sending a patient for an unnecessary procedure or having an adverse event or bad outcome because of the failure to diagnose.”

So far, no AI lawsuits have been filed, but they may make an appearance soon, said Sue Boisvert, senior patient safety risk manager at The Doctors Company, a national medical liability insurer.

“There are hundreds of AI programs currently in use in health care,” she said. “At some point, a provider will make a decision that is contrary to what the AI recommended. The AI may be wrong, or the provider may be wrong. Either way, the provider will neglect to document their clinical reasoning, a patient will be harmed, and we will have the first AI claim.”

Upcoming AI legal risks to watch for

Lawsuits that allege biased patient care by physicians on the basis of algorithmic bias may also be forthcoming, analysts warn.

Much has been written about algorithmic bias that compounds and worsens inequities in socioeconomic status, ethnicity, sexual orientation, and gender in health systems. In 2019, a groundbreaking article in Science shed light on commonly used algorithms that are considered racially biased and how health care professionals often use such information to make medical decisions.

No claims involving AI bias have come down the pipeline yet, but it’s an area to watch, said Ms. Boisvert. She noted a website that highlights complaints and accusations of AI bias, including in health care.

“We need to be sure the training of the AI is appropriate, current, and broad enough so that there is no bias in the AI when it’s participating in the decision-making,” said Ms. Boisvert. “Imagine if the AI is diagnosing based on a dataset that is not local. It doesn’t represent the population at that particular hospital, and it’s providing inaccurate information to the physicians who are then making decisions about treatment.”

In pain management, for example, there are known differences in how patients experience pain, Ms. Boisvert said. If AI was being used to develop an algorithm for how a particular patient’s postoperative pain should be managed, and the algorithm did not include the differences, the pain control for a certain patient could be inappropriate. A poor outcome resulting from the treatment could lead to a claim against the physician or hospital that used the biased AI system, she said.

In the future, as AI becomes more integrated and accepted in medicine, there may be a risk of legal complaints against doctors for not using AI, said Saurabh Jha, MD, an associate professor of radiology at the University of Pennsylvania, Philadelphia, and a scholar of AI in radiology.

“Ultimately, we might get to a place where AI starts helping physicians detect more or reduce the miss of certain conditions, and it becomes the standard of care,” Dr. Jha said. “For example, if it became part of the standard of care for pulmonary embolism [PE] detection, and you didn’t use it and there was a miss, that could put you at legal risk. We’re not at that stage yet, but that is one future possibility.”

Dr. Parikh envisions an even cloudier liability landscape as the potential grows for AI to control patient care decisions. In such a scenario, rather than just issuing an alert or prediction to a physician, the AI system could trigger an action.

For instance, an algorithm could be trained to predict sepsis and, once triggered, initiate a nurse-led rapid response or a change in patient care outside the clinician’s control, said Dr. Parikh, who coauthored a recent article on AI and medical liability in The Milbank Quarterly.
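To make the liability concern concrete, here is a purely hypothetical sketch of such an autonomous workflow: a sepsis-risk score that, above a threshold, dispatches a response team directly instead of merely alerting the clinician. The features, weights, and threshold are invented for illustration; real sepsis models are trained on large EHR datasets.

```python
# Hypothetical autonomous AI workflow: the model's output triggers an
# action directly, rather than presenting a recommendation for a
# clinician to accept or reject. All values here are illustrative.
from dataclasses import dataclass

@dataclass
class Vitals:
    heart_rate: int
    resp_rate: int
    temp_c: float

def sepsis_risk(v: Vitals) -> float:
    """Toy rule-based score standing in for a trained risk model."""
    score = 0.0
    if v.heart_rate > 100:
        score += 0.4
    if v.resp_rate > 22:
        score += 0.3
    if v.temp_c > 38.0 or v.temp_c < 36.0:
        score += 0.3
    return score

def on_new_vitals(v: Vitals, threshold: float = 0.6) -> str:
    # Crossing the threshold dispatches the rapid response team directly:
    # this is the step the clinician no longer controls.
    if sepsis_risk(v) >= threshold:
        return "dispatch rapid response team"
    return "continue routine monitoring"

print(on_new_vitals(Vitals(heart_rate=118, resp_rate=26, temp_c=38.6)))
```

In this design, the liability question is exactly the one Dr. Parikh raises: when the action upon the patient is determined by the model rather than the clinician, responsibility for a bad outcome is much harder to assign.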

“That’s still very much the minority of how AI is being used, but as evidence is growing that AI-based diagnostic tools perform equivalent or even superior to physicians, these autonomous workflows are being considered,” Dr. Parikh said. “When the ultimate action upon the patient is more determined by the AI than what the clinician does, then I think the liability picture gets murkier, and we should be thinking about how we can respond to that from a liability framework.”

How you can prevent AI-related lawsuits

The first step to preventing an AI-related claim is being aware of when and how you are using AI.

Ensure you’re informed about how the AI was trained, Ms. Boisvert stresses.

“Ask questions!” she said. “Is the AI safe? Are the recommendations accurate? Does the AI perform better than current systems? In what way? What databases were used, and did the programmers consider bias? Do I understand how to use the results?”

Never blindly trust the AI but rather view it as a data point in a medical decision, said Dr. Parikh. Ensure that other sources of medical information are properly accessed and that best practices for your specialty are still being followed.

When using any form of AI, document your usage, adds Mr. Rashbaum. A record that clearly outlines how the physician incorporated the AI is critical if a claim later arises in which the doctor is accused of AI-related malpractice, he said.

“Indicating how the AI tool was used, why it was used, and that it was used in conjunction with available clinical information and the clinician’s best judgment could reduce the risk of being found responsible as a result of AI use in a particular case,” he said.

Use chatbots, such as ChatGPT, the way they were intended, as support tools, rather than definitive diagnostic instruments, adds Dr. Castro.

“Doctors should also be well-trained in interpreting and understanding the suggestions provided by ChatGPT and should use their clinical judgment and experience alongside the AI tool for more accurate decision-making,” he said.

In addition, because no AI insurance product exists on the market, physicians and organizations using AI – particularly for direct health care – should evaluate their current insurance or insurance-like products to determine where a claim involving AI might fall and whether the policy would respond, said Ms. Boisvert. The AI vendor/manufacturer will likely have indemnified themselves in the purchase and sale agreement or contract, she said.

It will also become increasingly important for medical practices, hospitals, and health systems to put in place strong data governance strategies, Mr. LeTang said.

“AI relies on good data,” he said. “A data governance strategy is a key component to making sure we understand where the data is coming from, what is represents, how accurate it is, if it’s reproducible, what controls are in place to ensure the right people have the right access, and that if we’re starting to use it to build algorithms, that it’s deidentified.”

While no malpractice claims associated with the use of AI have yet surfaced, this may change as legal courts catch up on the backlog of malpractice claims that were delayed because of COVID-19, and even more so as AI becomes more prevalent in health care, Mr. LeTang said.

“Similar to the attention that autonomous driving systems, like Tesla, receive when the system fails and accidents occur, we can be assured that media outlets will widely publicize AI-related medical adverse events,” he said. “It is crucial for health care professionals, AI developers, and regulatory authorities to work together to ensure the responsible use of AI in health care, with patient safety as the top priority. By doing so, they can mitigate the risks associated with AI implementation and minimize the potential for legal disputes arising from AI-related medical errors.”

A version of this article first appeared on Medscape.com.


The ways in which artificial intelligence (AI) may transform the future of medicine is making headlines across the globe. But chances are, you’re already using AI in your practice every day – you may just not realize it.
 

And whether you recognize the presence of AI or not, the technology could be putting you in danger of a lawsuit, legal experts say.

The use of AI in your daily practice can come with hidden liabilities, say legal experts, and as hospitals and medical groups deploy AI into more areas of health care, new liability exposures may be on the horizon.

“For physicians, AI has also not yet drastically changed or improved the way care is provided or consumed,” said Michael LeTang, chief nursing informatics officer and vice president of risk management and patient safety at Healthcare Risk Advisors, part of TDC Group. “Consequently, it may seem like AI is not present in their work streams, but in reality, it has been utilized in health care for several years. As AI technologies continue to develop and become more sophisticated, we can expect them to play an increasingly significant role in health care.”

Today, most AI applications in health care use narrow AI, which is designed to complete a single task without human assistance, as opposed to artificial general intelligence (AGI), which pertains to human-level reasoning and problem solving across a broad spectrum. Here are some ways doctors are using AI throughout the day – sometimes being aware of its assistance, and sometimes being unaware:

  • Many doctors use electronic health records (EHRs) with integrated AI that include computerized clinical decision support tools designed to reduce the risk of diagnostic error and to integrate decision-making in the medication ordering function.
  • Cardiologists, pathologists, and dermatologists use AI in the interpretation of vast amounts of images, tracings, and complex patterns.
  • Surgeons are using AI-enhanced surgical robotics for orthopedic surgeries, such as joint replacement and spine surgery.
  • A growing number of doctors are using ChatGPT to assist in drafting prior authorization letters for insurers. Experts say more doctors are also experimenting with ChatGPT to support medical decision-making.
  • Within oncology, physicians use machine learning techniques in the form of computer-aided detection systems for early breast cancer detection.
  • AI algorithms are often used by health systems for workflow, staffing optimization, population management, and care coordination.
  • Some systems within EHRs use AI to indicate high-risk patients.
  • Physicians are using AI applications for the early recognition of sepsis, including EHR-integrated decision tools, such as the Hospital Corporation of America Healthcare’s Sepsis Prediction and Optimization of Therapy (SPOT) and the Sepsis Early Risk Assessment algorithm.
  • About 30% of radiologists use AI in their practice to analyze x-rays and CT scans.
  • Epic Systems recently announced a partnership with Microsoft to integrate ChatGPT into MyChart, Epic’s patient portal system. Pilot hospitals will utilize ChatGPT to automatically generate responses to patient-generated questions sent via the portal.
 

 

The growth of AI in health care has been enormous, and it’s only going to continue, said Ravi B. Parikh, MD, an assistant professor in the department of medical ethics and health policy and the department of medicine at the University of Pennsylvania, Philadelphia.

“What’s really critical is that physicians, clinicians, and nurses using AI are provided with the tools to understand how artificial intelligence works and, most importantly, understand that they are still accountable for making the ultimate decision,” Mr. LeTang said. “The information is not always going to be the right thing to do or the most accurate thing to do. They’re still liable for making a bad decision, even if AI is driving that.”
 

What are the top AI legal dangers of today?

A pressing legal risk is becoming too reliant on the suggestions that AI-based systems provide, which can lead to poor care decisions, said Kenneth Rashbaum, a New York–based cybersecurity attorney with more than 25 years of experience in medical malpractice defense.

This can occur, for example, when using clinical support systems that leverage AI, machine learning, or statistical pattern recognition. Today, clinical support systems are commonly administered through EHRs and other computerized clinical workflows. In general, such systems match a patient’s characteristics to a computerized clinical knowledge base. An assessment or recommendation is then presented to the physician for a decision.
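At its core, the matching step such a system performs can be pictured as rule lookup. The sketch below is a hypothetical illustration, not any real vendor's implementation; the rule names, criteria, and alert text are invented.

```python
# Hypothetical sketch of a clinical decision support matching step:
# compare a patient's characteristics against rules in a clinical
# knowledge base, then surface recommendations for the physician to
# accept or reject. All rules and thresholds here are illustrative.

def match_rules(patient, knowledge_base):
    """Return recommendations whose criteria the patient meets."""
    hits = []
    for rule in knowledge_base:
        if all(patient.get(key) == value for key, value in rule["criteria"].items()):
            hits.append(rule["recommendation"])
    return hits

knowledge_base = [
    {"criteria": {"on_warfarin": True, "new_rx": "aspirin"},
     "recommendation": "Bleeding-risk alert: review anticoagulant combination"},
    {"criteria": {"penicillin_allergy": True, "new_rx": "amoxicillin"},
     "recommendation": "Allergy alert: consider alternative antibiotic"},
]

patient = {"on_warfarin": True, "new_rx": "aspirin", "penicillin_allergy": False}
suggestions = match_rules(patient, knowledge_base)
# The physician, not the system, makes the final call on each suggestion.
```

The key point for liability is visible in the last line: the system only proposes; the clinician disposes.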

“If the clinician blindly accepts it without considering whether it’s appropriate for this patient at this time with this presentation, the clinician may bear some responsibility if there is an untoward result,” Mr. Rashbaum said.

“A common claim, even in the days before the EMR [electronic medical record] and AI, was that the clinician did not take all available information into account in rendering treatment, including history of past and present condition, as reflected in the records, communication with past and other present treating clinicians, lab and radiology results, discussions with the patient, and physical examination findings,” he said. “So, if the clinician relied upon the support prompt to the exclusion of these other sources of information, that could be a very strong argument for the plaintiff.”

Chatbots, such as OpenAI’s ChatGPT, are another form of AI raising legal red flags. ChatGPT, trained on a massive set of text data, can carry out conversations, write code, draft emails, and answer any question posed. The chatbot has gained considerable credibility for accurately diagnosing rare conditions in seconds, and it recently passed the U.S. Medical Licensing Examination.

It’s unclear how many doctors are signing onto the ChatGPT website daily, but physicians are actively using the chatbot, particularly for assistance with prior authorization letters and to support decision-making processes in their practices, said Mr. LeTang.

When physicians ask ChatGPT a question, however, they should be mindful that ChatGPT could “hallucinate,” a term that refers to a generated response that sounds plausible but is factually incorrect or is unrelated to the context, explains Harvey Castro, MD, an emergency physician, ChatGPT health care expert, and author of the 2023 book “ChatGPT and Healthcare: Unlocking the Potential of Patient Empowerment.”

Acting on ChatGPT’s response without vetting the information places doctors at serious risk of a lawsuit, he said.

“Sometimes, the response is half true and half false,” he said. “Say, I go outside my specialty of emergency medicine and ask it about a pediatric surgical procedure. It could give me a response that sounds medically correct, but then I ask a pediatric cardiologist, and he says, ‘We don’t even do this. This doesn’t even exist!’ Physicians really have to make sure they are vetting the information provided.”

In response to ChatGPT’s growing usage by health care professionals, hospitals and practices are quickly implementing guidelines, policies, and restrictions that caution physicians about the accuracy of ChatGPT-generated information, adds Mr. LeTang.

Emerging best practices include avoiding the input of patient health information, personally identifiable information, or any data that could be commercially valuable or considered the intellectual property of a hospital or health system, he said.
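One way to operationalize the "don't paste identifiers into a chatbot" guideline is to scrub obvious identifiers from free text before it leaves the organization. The sketch below is a minimal illustration only; real de-identification (e.g., under the HIPAA Safe Harbor method) covers many more categories than the three invented patterns shown here.

```python
import re

# Minimal sketch of pre-submission scrubbing: redact a few obvious
# identifier patterns from draft text before sending it to an external
# chatbot. The patterns are illustrative assumptions, not a complete
# or compliant de-identification scheme.

PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),            # US SSN-style numbers
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),           # slash-formatted dates
    (re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE), "[MRN]"),  # medical record numbers
]

def scrub(text):
    """Replace recognized identifier patterns with placeholder tokens."""
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

draft = "Patient MRN: 4471203, DOB 03/14/1962, presents with chest pain."
print(scrub(draft))  # Patient [MRN], DOB [DATE], presents with chest pain.
```

Scrubbing is a floor, not a ceiling: names, addresses, and rare-condition details can still be identifying, which is why the emerging guidance is to keep patient data out of external chatbots altogether.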

“Another crucial guideline is not to rely solely on ChatGPT as a definitive source for clinical decision-making; physicians must exercise their professional judgment,” he said. “If best practices are not adhered to, the associated risks are present today. However, these risks may become more significant as AI technologies continue to evolve and become increasingly integrated into health care.”

Other dangers include the potential for misdiagnosis by AI systems and the risk of unnecessary procedures if physicians do not thoroughly evaluate and validate AI predictions.

As an example, Mr. LeTang described a case in which a physician documents in the EHR that a patient has presented to the emergency department with chest pains and other signs of a heart attack, and an AI algorithm predicts that the patient is experiencing an active myocardial infarction. If the physician then sends the patient for stenting or an angioplasty without other concrete evidence or tests to confirm the diagnosis, the doctor could later face a misdiagnosis complaint if the costly procedures were unnecessary.

“That’s one of the risks of using artificial intelligence,” he said. “A large percentage of malpractice claims is failure to diagnose, delayed diagnosis, or inaccurate diagnosis. What falls in the category of failure to diagnose is sending a patient for an unnecessary procedure or having an adverse event or bad outcome because of the failure to diagnose.”

So far, no AI lawsuits have been filed, but they may make an appearance soon, said Sue Boisvert, senior patient safety risk manager at The Doctors Company, a national medical liability insurer.

“There are hundreds of AI programs currently in use in health care,” she said. “At some point, a provider will make a decision that is contrary to what the AI recommended. The AI may be wrong, or the provider may be wrong. Either way, the provider will neglect to document their clinical reasoning, a patient will be harmed, and we will have the first AI claim.”
 

 

 

Upcoming AI legal risks to watch for

Lawsuits alleging that physicians delivered biased patient care on the basis of algorithmic bias may also be forthcoming, analysts warn.

Much has been written about algorithmic bias that compounds and worsens inequities in socioeconomic status, ethnicity, sexual orientation, and gender in health systems. In 2019, a groundbreaking article in Science shed light on commonly used algorithms that are considered racially biased and how health care professionals often use such information to make medical decisions.

No claims involving AI bias have come down the pipeline yet, but it’s an area to watch, said Ms. Boisvert. She noted a website that highlights complaints and accusations of AI bias, including in health care.

“We need to be sure the training of the AI is appropriate, current, and broad enough so that there is no bias in the AI when it’s participating in the decision-making,” said Ms. Boisvert. “Imagine if the AI is diagnosing based on a dataset that is not local. It doesn’t represent the population at that particular hospital, and it’s providing inaccurate information to the physicians who are then making decisions about treatment.”
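The representativeness concern in that quote can be made concrete with a simple check: compare the demographic mix of a model's training data against the local patient population before trusting its outputs. The cohorts, group labels, and the 10-percentage-point tolerance below are all hypothetical.

```python
# Illustrative check for dataset shift: flag demographic groups whose
# share of the training data differs from their share of the local
# hospital population by more than a tolerance. All figures are invented.

def representation_gaps(training_mix, local_mix, tolerance=0.10):
    """Return groups whose training share differs from the local share
    by more than `tolerance` (expressed as a fraction of the population)."""
    gaps = {}
    for group in local_mix:
        diff = abs(training_mix.get(group, 0.0) - local_mix[group])
        if diff > tolerance:
            gaps[group] = round(diff, 2)
    return gaps

training_mix = {"age_65_plus": 0.15, "female": 0.48}
local_mix    = {"age_65_plus": 0.40, "female": 0.52}

print(representation_gaps(training_mix, local_mix))
# A 25-point gap for older patients is a signal to question the model locally.
```

A gap like this does not prove the model is biased, but it tells the organization exactly which question to put to the vendor before relying on the tool.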

In pain management, for example, there are known differences in how patients experience pain, Ms. Boisvert said. If AI were used to develop an algorithm for managing a particular patient’s postoperative pain, and the algorithm did not account for those differences, the resulting pain control could be inappropriate. A poor outcome from that treatment could lead to a claim against the physician or hospital that used the biased AI system, she said.

In the future, as AI becomes more integrated and accepted in medicine, there may be a risk of legal complaints against doctors for not using AI, said Saurabh Jha, MD, an associate professor of radiology at the University of Pennsylvania, Philadelphia, and a scholar of AI in radiology.

“Ultimately, we might get to a place where AI starts helping physicians detect more or reduce the miss of certain conditions, and it becomes the standard of care,” Dr. Jha said. “For example, if it became part of the standard of care for pulmonary embolism [PE] detection, and you didn’t use it for PE detection, and there was a miss. That could put you at legal risk. We’re not at that stage yet, but that is one future possibility.”

Dr. Parikh envisions an even cloudier liability landscape as the potential grows for AI to control patient care decisions. In such a scenario, rather than just issuing an alert or prediction to a physician, the AI system could trigger an action.

For instance, if an algorithm is trained to predict sepsis and, once triggered, the AI could initiate a nurse-led rapid response or a change in patient care outside the clinician’s control, said Dr. Parikh, who coauthored a recent article on AI and medical liability in The Milbank Quarterly.
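The distinction Dr. Parikh draws can be sketched as a single branch: the same model output can either surface as an alert for a clinician to act on, or directly trigger a care pathway. The threshold, score, and action strings below are invented for illustration.

```python
# Illustrative sketch of alert-only vs. autonomous AI workflows for a
# sepsis prediction model. The 0.8 threshold and the action strings are
# hypothetical; no real system's behavior is implied.

SEPSIS_THRESHOLD = 0.8

def handle_prediction(risk_score, autonomous=False):
    """Route a sepsis risk score to an alert or an automatic action."""
    if risk_score < SEPSIS_THRESHOLD:
        return "no action"
    if autonomous:
        # Autonomous workflow: the system itself initiates the response,
        # outside the clinician's control.
        return "rapid response team dispatched"
    # Alert-only workflow: the clinician remains the decision-maker.
    return "alert sent to clinician for review"

print(handle_prediction(0.9))                   # alert sent to clinician for review
print(handle_prediction(0.9, autonomous=True))  # rapid response team dispatched
```

Liability-wise, everything hinges on that `autonomous` flag: once the system acts on the patient directly, the question of who is responsible for a bad outcome becomes much harder to answer.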

“That’s still very much the minority of how AI is being used, but as evidence is growing that AI-based diagnostic tools perform equivalent or even superior to physicians, these autonomous workflows are being considered,” Dr. Parikh said. “When the ultimate action upon the patient is more determined by the AI than what the clinician does, then I think the liability picture gets murkier, and we should be thinking about how we can respond to that from a liability framework.”
 

 

 

How you can prevent AI-related lawsuits

The first step to preventing an AI-related claim is being aware of when and how you are using AI.

Ensure you’re informed about how the AI was trained, Ms. Boisvert stresses.

“Ask questions!” she said. “Is the AI safe? Are the recommendations accurate? Does the AI perform better than current systems? In what way? What databases were used, and did the programmers consider bias? Do I understand how to use the results?”

Never blindly trust the AI but rather view it as a data point in a medical decision, said Dr. Parikh. Ensure that other sources of medical information are properly accessed and that best practices for your specialty are still being followed.

When using any form of AI, document your usage, adds Mr. Rashbaum. A record that clearly outlines how the physician incorporated the AI is critical if a claim later arises in which the doctor is accused of AI-related malpractice, he said.

“Indicating how the AI tool was used, why it was used, and that it was used in conjunction with available clinical information and the clinician’s best judgment could reduce the risk of being found responsible as a result of AI use in a particular case,” he said.
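Mr. Rashbaum's three elements (how the tool was used, why, and alongside what clinical judgment) lend themselves to a structured note kept with the chart. The sketch below is a hypothetical record format; the field names and the sepsis scenario are invented, and no EHR vendor API is implied.

```python
from datetime import datetime, timezone

# Hedged sketch of documenting AI use: a structured note recording how an
# AI tool figured in a clinical decision. Field names are illustrative.

def ai_use_note(tool, purpose, ai_output, clinician_assessment, final_decision):
    """Build a timestamped record of how an AI tool informed a decision."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                                  # which AI tool was used
        "purpose": purpose,                            # why it was used
        "ai_output": ai_output,                        # what the tool suggested
        "clinician_assessment": clinician_assessment,  # independent clinical judgment
        "final_decision": final_decision,              # the decision and its basis
    }

note = ai_use_note(
    tool="sepsis early-warning model (hypothetical)",
    purpose="screen ED patient for sepsis risk",
    ai_output="high risk score",
    clinician_assessment="vitals and lactate reviewed; findings consistent",
    final_decision="sepsis bundle initiated based on clinical exam plus AI alert",
)
```

A record like this shows the tool was one input among several, which is exactly the posture Mr. Rashbaum says reduces exposure.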

Use chatbots, such as ChatGPT, the way they were intended, as support tools, rather than definitive diagnostic instruments, adds Dr. Castro.

“Doctors should also be well-trained in interpreting and understanding the suggestions provided by ChatGPT and should use their clinical judgment and experience alongside the AI tool for more accurate decision-making,” he said.

In addition, because no AI insurance product exists on the market, physicians and organizations using AI – particularly for direct health care – should evaluate their current insurance or insurance-like products to determine where a claim involving AI might fall and whether the policy would respond, said Ms. Boisvert. The AI vendor/manufacturer will likely have indemnified itself in the purchase and sale agreement or contract, she said.

It will also become increasingly important for medical practices, hospitals, and health systems to put in place strong data governance strategies, Mr. LeTang said.

“AI relies on good data,” he said. “A data governance strategy is a key component to making sure we understand where the data is coming from, what it represents, how accurate it is, if it’s reproducible, what controls are in place to ensure the right people have the right access, and that if we’re starting to use it to build algorithms, that it’s deidentified.”

While no malpractice claims associated with the use of AI have yet surfaced, this may change as courts catch up on the backlog of malpractice claims delayed because of COVID-19, and even more so as AI becomes more prevalent in health care, Mr. LeTang said.

“Similar to the attention that autonomous driving systems, like Tesla, receive when the system fails and accidents occur, we can be assured that media outlets will widely publicize AI-related medical adverse events,” he said. “It is crucial for health care professionals, AI developers, and regulatory authorities to work together to ensure the responsible use of AI in health care, with patient safety as the top priority. By doing so, they can mitigate the risks associated with AI implementation and minimize the potential for legal disputes arising from AI-related medical errors.”

A version of this article first appeared on Medscape.com.

The ways in which artificial intelligence (AI) may transform the future of medicine is making headlines across the globe. But chances are, you’re already using AI in your practice every day – you may just not realize it.
 

And whether you recognize the presence of AI or not, the technology could be putting you in danger of a lawsuit, legal experts say.

The use of AI in your daily practice can come with hidden liabilities, say legal experts, and as hospitals and medical groups deploy AI into more areas of health care, new liability exposures may be on the horizon.

“For physicians, AI has also not yet drastically changed or improved the way care is provided or consumed,” said Michael LeTang, chief nursing informatics officer and vice president of risk management and patient safety at Healthcare Risk Advisors, part of TDC Group. “Consequently, it may seem like AI is not present in their work streams, but in reality, it has been utilized in health care for several years. As AI technologies continue to develop and become more sophisticated, we can expect them to play an increasingly significant role in health care.”

Today, most AI applications in health care use narrow AI, which is designed to complete a single task without human assistance, as opposed to artificial general intelligence (AGI), which pertains to human-level reasoning and problem solving across a broad spectrum. Here are some ways doctors are using AI throughout the day – sometimes being aware of its assistance, and sometimes being unaware:

  • Many doctors use electronic health records (EHRs) with integrated AI that include computerized clinical decision support tools designed to reduce the risk of diagnostic error and to integrate decision-making in the medication ordering function.
  • Cardiologists, pathologists, and dermatologists use AI in the interpretation of vast amounts of images, tracings, and complex patterns.
  • Surgeons are using AI-enhanced surgical robotics for orthopedic surgeries, such as joint replacement and spine surgery.
  • A growing number of doctors are using ChatGPT to assist in drafting prior authorization letters for insurers. Experts say more doctors are also experimenting with ChatGPT to support medical decision-making.
  • Within oncology, physicians use machine learning techniques in the form of computer-aided detection systems for early breast cancer detection.
  • AI algorithms are often used by health systems for workflow, staffing optimization, population management, and care coordination.
  • Some systems within EHRs use AI to indicate high-risk patients.
  • Physicians are using AI applications for the early recognition of sepsis, including EHR-integrated decision tools, such as the Hospital Corporation of America Healthcare’s Sepsis Prediction and Optimization Therapy and the Sepsis Early Risk Assessment algorithm.
  • About 30% of radiologists use AI in their practice to analyze x-rays and CT scans.
  • Epic Systems recently announced a partnership with Microsoft to integrate ChatGPT into MyChart, Epic’s patient portal system. Pilot hospitals will utilize ChatGPT to automatically generate responses to patient-generated questions sent via the portal.
 

 

The growth of AI in health care has been enormous, and it’s only going to continue, said Ravi B. Parikh, MD, an assistant professor in the department of medical ethics and health policy and medicine at the University of Pennsylvania, Philadelphia.

“What’s really critical is that physicians, clinicians, and nurses using AI are provided with the tools to understand how artificial intelligence works and, most importantly, understand that they are still accountable for making the ultimate decision,” Mr. LeTang said, “The information is not always going to be the right thing to do or the most accurate thing to do. They’re still liable for making a bad decision, even if AI is driving that.”
 

What are the top AI legal dangers of today?

A pressing legal risk is becoming too reliant on the suggestions that AI-based systems provide, which can lead to poor care decisions, said Kenneth Rashbaum, a New York–based cybersecurity attorney with more than 25 years of experience in medical malpractice defense.

This can occur, for example, when using clinical support systems that leverage AI, machine learning, or statistical pattern recognition. Today, clinical support systems are commonly administered through EHRs and other computerized clinical workflows. In general, such systems match a patient’s characteristics to a computerized clinical knowledge base. An assessment or recommendation is then presented to the physician for a decision.

“If the clinician blindly accepts it without considering whether it’s appropriate for this patient at this time with this presentation, the clinician may bear some responsibility if there is an untoward result,” Mr. Rashbaum said.

“A common claim even in the days before the EMR [electronic medical record] and AI, was that the clinician did not take all available information into account in rendering treatment, including history of past and present condition, as reflected in the records, communication with past and other present treating clinicians, lab and radiology results, discussions with the patient, and physical examination findings,” he said. “So, if the clinician relied upon the support prompt to the exclusion of these other sources of information, that could be a very strong argument for the plaintiff.”

Chatbots, such OpenAI’s ChatGPT, are another form of AI raising legal red flags. ChatGPT, trained on a massive set of text data, can carry out conversations, write code, draft emails, and answer any question posed. The chatbot has gained considerable credibility for accurately diagnosing rare conditions in seconds, and it recently passed the U.S. Medical Licensing Examination.

It’s unclear how many doctors are signing onto the ChatGPT website daily, but physicians are actively using the chatbot, particularly for assistance with prior authorization letters and to support decision-making processes in their practices, said Mr. LeTang.

When physicians ask ChatGPT a question, however, they should be mindful that ChatGPT could “hallucinate,” a term that refers to a generated response that sounds plausible but is factually incorrect or is unrelated to the context, explains Harvey Castro, MD, an emergency physician, ChatGPT health care expert, and author of the 2023 book “ChatGPT and Healthcare: Unlocking the Potential of Patient Empowerment.”

Acting on ChatGPT’s response without vetting the information places doctors at serious risk of a lawsuit, he said.

“Sometimes, the response is half true and half false,” he said. “Say, I go outside my specialty of emergency medicine and ask it about a pediatric surgical procedure. It could give me a response that sounds medically correct, but then I ask a pediatric cardiologist, and he says, ‘We don’t even do this. This doesn’t even exist!’ Physicians really have to make sure they are vetting the information provided.”

In response to ChatGPT’s growing usage by health care professionals, hospitals and practices are quickly implementing guidelines, policies, and restrictions that caution physicians about the accuracy of ChatGPT-generated information, adds Mr. LeTang.

Emerging best practices include avoiding the input of patient health information, personally identifiable information, or any data that could be commercially valuable or considered the intellectual property of a hospital or health system, he said.

“Another crucial guideline is not to rely solely on ChatGPT as a definitive source for clinical decision-making; physicians must exercise their professional judgment,” he said. “If best practices are not adhered to, the associated risks are present today. However, these risks may become more significant as AI technologies continue to evolve and become increasingly integrated into health care.”

The potential for misdiagnosis by AI systems and the risk of unnecessary procedures if physicians do not thoroughly evaluate and validate AI predictions are other dangers.

As an example, Mr. LeTang described a case in which a physician documents in the EHR that a patient has presented to the emergency department with chest pains and other signs of a heart attack, and an AI algorithm predicts that the patient is experiencing an active myocardial infarction. If the physician then sends the patient for stenting or an angioplasty without other concrete evidence or tests to confirm the diagnosis, the doctor could later face a misdiagnosis complaint if the costly procedures were unnecessary.

“That’s one of the risks of using artificial intelligence,” he said. “A large percentage of malpractice claims is failure to diagnose, delayed diagnosis, or inaccurate diagnosis. What falls in the category of failure to diagnose is sending a patient for an unnecessary procedure or having an adverse event or bad outcome because of the failure to diagnose.”

So far, no AI lawsuits have been filed, but they may make an appearance soon, said Sue Boisvert, senior patient safety risk manager at The Doctors Company, a national medical liability insurer.

“There are hundreds of AI programs currently in use in health care,” she said. “At some point, a provider will make a decision that is contrary to what the AI recommended. The AI may be wrong, or the provider may be wrong. Either way, the provider will neglect to document their clinical reasoning, a patient will be harmed, and we will have the first AI claim.”
 

 

 

Upcoming AI legal risks to watch for

Lawsuits that allege biased patient care by physicians on the basis of algorithmic bias may also be forthcoming, analysts warn.

Much has been written about algorithmic bias that compounds and worsens inequities in socioeconomic status, ethnicity, sexual orientation, and gender in health systems. In 2019, a groundbreaking article in Science shed light on commonly used algorithms that are considered racially biased and how health care professionals often use such information to make medical decisions.

No claims involving AI bias have come down the pipeline yet, but it’s an area to watch, said Ms. Boisvert. She noted a website that highlights complaints and accusations of AI bias, including in health care.

“We need to be sure the training of the AI is appropriate, current, and broad enough so that there is no bias in the AI when it’s participating in the decision-making,” said Ms. Boisvert. “Imagine if the AI is diagnosing based on a dataset that is not local. It doesn’t represent the population at that particular hospital, and it’s providing inaccurate information to the physicians who are then making decisions about treatment.”

In pain management, for example, there are known differences in how patients experience pain, Ms. Boisvert said. If AI was being used to develop an algorithm for how a particular patient’s postoperative pain should be managed, and the algorithm did not include the differences, the pain control for a certain patient could be inappropriate. A poor outcome resulting from the treatment could lead to a claim against the physician or hospital that used the biased AI system, she said.

In the future, as AI becomes more integrated and accepted in medicine, there may be a risk of legal complaints against doctors for not using AI, said Saurabh Jha, MD, an associate professor of radiology at the University of Pennsylvania, Philadelphia, and a scholar of AI in radiology.

“Ultimately, we might get to a place where AI starts helping physicians detect more or reduce the miss of certain conditions, and it becomes the standard of care,” Dr. Jha said. “For example, if it became part of the standard of care for pulmonary embolism [PE] detection, and you didn’t use it for PE detection, and there was a miss. That could put you at legal risk. We’re not at that stage yet, but that is one future possibility.”

Dr. Parikh envisions an even cloudier liability landscape as the potential grows for AI to control patient care decisions. In such a scenario, rather than just issuing an alert or prediction to a physician, the AI system could trigger an action.

For instance, if an algorithm is trained to predict sepsis and, once triggered, the AI could initiate a nurse-led rapid response or a change in patient care outside the clinician’s control, said Dr. Parikh, who coauthored a recent article on AI and medical liability in The Milbank Quarterly.

“That’s still very much the minority of how AI is being used, but as evidence is growing that AI-based diagnostic tools perform equivalent or even superior to physicians, these autonomous workflows are being considered,” Dr. Parikh said. “When the ultimate action upon the patient is more determined by the AI than what the clinician does, then I think the liability picture gets murkier, and we should be thinking about how we can respond to that from a liability framework.”
How you can prevent AI-related lawsuits

The first step to preventing an AI-related claim is being aware of when and how you are using AI.

Ensure you’re informed about how the AI was trained, Ms. Boisvert stressed.

“Ask questions!” she said. “Is the AI safe? Are the recommendations accurate? Does the AI perform better than current systems? In what way? What databases were used, and did the programmers consider bias? Do I understand how to use the results?”

Never blindly trust the AI but rather view it as a data point in a medical decision, said Dr. Parikh. Ensure that other sources of medical information are properly accessed and that best practices for your specialty are still being followed.

When using any form of AI, document your usage, added Mr. Rashbaum. A record that clearly outlines how the physician incorporated the AI is critical if a claim later arises in which the doctor is accused of AI-related malpractice, he said.

“Indicating how the AI tool was used, why it was used, and that it was used in conjunction with available clinical information and the clinician’s best judgment could reduce the risk of being found responsible as a result of AI use in a particular case,” he said.

Use chatbots, such as ChatGPT, the way they were intended – as support tools rather than definitive diagnostic instruments, added Dr. Castro.

“Doctors should also be well-trained in interpreting and understanding the suggestions provided by ChatGPT and should use their clinical judgment and experience alongside the AI tool for more accurate decision-making,” he said.

In addition, because no AI-specific insurance product exists on the market, physicians and organizations using AI – particularly for direct patient care – should evaluate their current insurance or insurance-like products to determine where a claim involving AI might fall and whether the policy would respond, said Ms. Boisvert. The AI vendor or manufacturer will likely have indemnified itself in the purchase and sale agreement or contract, she said.

It will also become increasingly important for medical practices, hospitals, and health systems to put in place strong data governance strategies, Mr. LeTang said.

“AI relies on good data,” he said. “A data governance strategy is a key component to making sure we understand where the data is coming from, what it represents, how accurate it is, if it’s reproducible, what controls are in place to ensure the right people have the right access, and that if we’re starting to use it to build algorithms, that it’s deidentified.”

While no malpractice claims associated with the use of AI have yet surfaced, this may change as courts work through the backlog of malpractice claims delayed by COVID-19, and even more so as AI becomes more prevalent in health care, Mr. LeTang said.

“Similar to the attention that autonomous driving systems, like Tesla, receive when the system fails and accidents occur, we can be assured that media outlets will widely publicize AI-related medical adverse events,” he said. “It is crucial for health care professionals, AI developers, and regulatory authorities to work together to ensure the responsible use of AI in health care, with patient safety as the top priority. By doing so, they can mitigate the risks associated with AI implementation and minimize the potential for legal disputes arising from AI-related medical errors.”

A version of this article first appeared on Medscape.com.
