SGLT2 inhibitors: Real-world data show benefits outweigh risks

A new study provides the first comprehensive safety profile of sodium-glucose cotransporter 2 (SGLT2) inhibitors in U.S. patients with chronic kidney disease (CKD) and type 2 diabetes receiving routine care and suggests that the benefits outweigh the risks.

Starting therapy with an SGLT2 inhibitor versus a glucagon-like peptide-1 (GLP-1) receptor agonist was associated with more lower limb amputations, nonvertebral fractures, and genital infections, but these risks need to be balanced against cardiovascular and renoprotective benefits, according to the researchers.

The analysis showed that there would be 2.1 more lower limb amputations, 2.5 more nonvertebral fractures, and 41 more genital infections per 1,000 patients per year among those receiving SGLT2 inhibitors versus an equal number of patients receiving GLP-1 agonists, lead author Edouard Fu, PhD, explained to this news organization in an email.

“On the other hand, we know from the evidence from randomized controlled trials that taking an SGLT2 inhibitor compared with placebo lowers the risk of developing kidney failure,” said Dr. Fu, who is a research fellow in the division of pharmacoepidemiology and pharmacoeconomics at Brigham and Women’s Hospital, Boston.

“For instance,” he continued, “in the DAPA-CKD clinical trial, dapagliflozin versus placebo led to 29 fewer events per 1,000 patients per year of the composite outcome (50% decline in estimated glomerular filtration rate [eGFR], kidney failure, cardiovascular or kidney death).”

In the CREDENCE trial, canagliflozin versus placebo led to 18 fewer events per 1,000 person-years for the composite outcome of doubling of serum creatinine, kidney failure, and cardiovascular or kidney death.

And in the EMPA-KIDNEY study, empagliflozin versus placebo led to 21 fewer events per 1,000 person-years for the composite outcome of progression of kidney disease or cardiovascular death.
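Rates quoted as "fewer events per 1,000 patients per year" convert directly into an approximate number needed to treat (NNT). A minimal Python sketch of that arithmetic, using the trial figures quoted above (the constant-rate, one-year NNT framing is a simplification for illustration, not a calculation reported by the trials themselves):

```python
# Convert "X fewer events per 1,000 patients per year" into an
# approximate one-year number needed to treat: NNT = 1,000 / X.
def nnt_one_year(fewer_events_per_1000: float) -> float:
    return 1000.0 / fewer_events_per_1000

for trial, fewer in [("DAPA-CKD", 29), ("CREDENCE", 18), ("EMPA-KIDNEY", 21)]:
    print(f"{trial}: treat ~{nnt_one_year(fewer):.0f} patients for 1 year "
          f"to prevent one composite event")
# DAPA-CKD ~34, CREDENCE ~56, EMPA-KIDNEY ~48
```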

“Thus, benefits would still outweigh the risks,” Dr. Fu emphasized.

‘Quantifies absolute rate of events among routine care patients’

“The importance of our paper,” he summarized, “is that it quantifies the absolute rate of events among routine care patients and may be used to inform shared decision-making.”

The analysis also found that the risks of diabetic ketoacidosis (DKA), hypovolemia, hypoglycemia, and severe urinary tract infection (UTI) were similar with SGLT2 inhibitors versus GLP-1 agonists, but the risk of developing acute kidney injury (AKI) was lower with an SGLT2 inhibitor.

“Our study can help inform patient-physician decision-making regarding risks and benefits before prescribing SGLT2 inhibitors in this population” of patients with CKD and diabetes treated in clinical practice, the researchers conclude, “but needs to be interpreted in light of its limitations, including residual confounding, short follow-up time, and the use of diagnosis codes to identify patients with CKD.”

The study was recently published in the Clinical Journal of the American Society of Nephrology.

Slow uptake, safety concerns

SGLT2 inhibitors are recommended as first-line therapy in patients with type 2 diabetes and CKD who have an eGFR equal to or greater than 20 mL/min per 1.73 m², and thus are at high risk for cardiovascular disease and kidney disease progression, Dr. Fu and colleagues write.

However, studies report that as few as 6% of patients with CKD and type 2 diabetes are currently prescribed SGLT2 inhibitors in the United States.

This slow uptake of SGLT2 inhibitors among patients with CKD may be partly due to concerns about DKA, fractures, amputations, and urogenital infections observed in clinical trials.

However, such trials are generally underpowered to assess rare adverse events, use monitoring protocols to lower the risk of adverse events, and include a highly selected patient population, and so safety in routine clinical practice is often unclear.

To examine this, the researchers identified health insurance claims data from 96,128 individuals (from Optum, IBM MarketScan, and Medicare databases) who were 18 years or older (65 years or older for Medicare) and had type 2 diabetes and at least one inpatient or two outpatient diagnostic codes for stage 3 or 4 CKD.

Of these patients, 32,192 had a newly filled prescription for an SGLT2 inhibitor (empagliflozin, dapagliflozin, canagliflozin, or ertugliflozin) and 63,936 had a newly filled prescription for a GLP-1 agonist (liraglutide, dulaglutide, semaglutide, exenatide, albiglutide, or lixisenatide) between April 2013, when the first SGLT2 inhibitor became available in the United States, and 2021.

The researchers matched 28,847 individuals who were initiated on an SGLT2 inhibitor with an equal number who were initiated on a GLP-1 agonist, based on propensity scores, adjusting for more than 120 baseline characteristics.
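The paper does not publish its matching code; as a rough illustration of the design, here is a minimal 1:1 propensity-score-matching sketch in Python (scikit-learn logistic model, greedy nearest-neighbor matching on the logit with a conventional 0.2-SD caliper; `X` and `treated` are hypothetical NumPy arrays of baseline covariates and treatment indicators, and this is a sketch of the general technique, not the authors' actual pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_match(X, treated, caliper_sd=0.2):
    """Greedy 1:1 nearest-neighbor match on the logit of the propensity
    score; pairs farther apart than caliper_sd * SD(logit) are dropped."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    ps = ps.clip(1e-6, 1 - 1e-6)          # guard against log(0)
    logit = np.log(ps / (1 - ps))
    caliper = caliper_sd * logit.std()
    controls = list(np.where(treated == 0)[0])
    pairs = []
    for i in np.where(treated == 1)[0]:
        if not controls:
            break
        j = min(controls, key=lambda c: abs(logit[i] - logit[c]))
        if abs(logit[i] - logit[j]) <= caliper:
            pairs.append((i, j))          # (SGLT2 initiator, GLP-1 initiator)
            controls.remove(j)
    return pairs
```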

Safety outcomes were based on previously identified potential safety signals.

Patients who were initiated on an SGLT2 inhibitor had 1.30-fold, 2.13-fold, and 3.08-fold higher risks of having a nonvertebral fracture, a lower limb amputation, and a genital infection, respectively, compared with patients who were initiated on a GLP-1 agonist, after a mean on-treatment time of 7.5 months.

Risks of DKA, hypovolemia, hypoglycemia, and severe UTI were similar in both groups.

Patients initiated on an SGLT2 inhibitor versus a GLP-1 agonist had a lower risk of AKI (hazard ratio, 0.93), equivalent to 6.75 fewer cases of AKI per 1,000 patients per year.
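A hazard ratio maps to an absolute difference only through the comparator arm's event rate. Under a simple constant-rate reading (an approximation for illustration, not the study's actual estimator), the two numbers quoted above imply a GLP-1-arm AKI rate of roughly 96 per 1,000 patient-years:

```python
# Constant-rate approximation: rate_SGLT2 ≈ HR * rate_GLP1,
# so the absolute difference is rate_GLP1 * (1 - HR).
hr = 0.93
fewer_per_1000_py = 6.75
rate_glp1 = fewer_per_1000_py / (1 - hr)   # ≈ 96.4 per 1,000 patient-years
rate_sglt2 = hr * rate_glp1                # ≈ 89.7 per 1,000 patient-years
print(round(rate_glp1, 1), round(rate_sglt2, 1))
```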

Patients had higher risks for lower limb amputation, genital infections, and nonvertebral fractures with SGLT2 inhibitors versus GLP-1 agonists across most of the prespecified subgroups by age, sex, cardiovascular disease, heart failure, and use of metformin, insulin, or sulfonylurea, but with wider confidence intervals.

Dr. Fu was supported by a Rubicon grant from the Dutch Research Council and has reported no relevant financial relationships. Disclosures for the other authors are listed with the article.

A version of this article originally appeared on Medscape.com.

Some diets better than others for heart protection

In an analysis of randomized trials, the Mediterranean diet and low-fat diets were linked to reduced risks of all-cause mortality and nonfatal MI over 3 years in adults at increased risk for cardiovascular disease (CVD), while the Mediterranean diet also showed a lower risk of stroke.

Five other popular diets appeared to have little or no benefit with regard to these outcomes.

“These findings with data presentations are extremely important for patients who are skeptical about the desirability of diet change,” wrote the authors, led by Giorgio Karam, a medical student at the University of Manitoba, Winnipeg.

The results were published online in The BMJ.

Dietary guidelines recommend various diets along with physical activity or other cointerventions for adults at increased CVD risk, but they are often based on low-certainty evidence from nonrandomized studies and on surrogate outcomes.

Several meta-analyses of randomized controlled trials with mortality and major CV outcomes have reported benefits of some dietary programs, but those studies did not use network meta-analysis to give absolute estimates and certainty of estimates for adults at intermediate and high risk, the authors noted.

For this study, Mr. Karam and colleagues conducted a comprehensive systematic review and network meta-analysis in which they compared the effects of seven popular structured diets on mortality and CVD events for adults with CVD or CVD risk factors.

The seven diet plans were the Mediterranean, low fat, very low fat, modified fat, combined low fat and low sodium, Ornish, and Pritikin diets. Data for the analysis came from 40 randomized controlled trials that involved 35,548 participants who were followed for an average of 3 years.

There was evidence of “moderate” certainty that the Mediterranean diet was superior to minimal intervention for all-cause mortality (odds ratio [OR], 0.72), CV mortality (OR, 0.55), stroke (OR, 0.65), and nonfatal MI (OR, 0.48).

On an absolute basis (per 1,000 over 5 years), the Mediterranean diet led to 17 fewer deaths from any cause, 13 fewer CV deaths, seven fewer strokes, and 17 fewer nonfatal MIs.

There was evidence of moderate certainty that a low-fat diet was superior to minimal intervention for prevention of all-cause mortality (OR, 0.84; nine fewer deaths per 1,000) and nonfatal MI (OR, 0.77; seven fewer deaths per 1,000). The low-fat diet had little to no benefit with regard to stroke reduction.
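An odds ratio becomes an absolute difference only once a baseline risk is fixed. A short sketch of that conversion; the 6% five-year baseline mortality risk below is an illustrative assumption (chosen because it roughly reproduces the paper's absolute figures), not a number reported by the authors:

```python
def risk_after_or(baseline_risk: float, odds_ratio: float) -> float:
    """Apply an odds ratio to a baseline risk: convert risk to odds,
    scale the odds, convert back to a risk."""
    odds = baseline_risk / (1 - baseline_risk)
    new_odds = odds_ratio * odds
    return new_odds / (1 + new_odds)

baseline = 0.06  # hypothetical 5-year all-cause mortality risk
for diet, or_ in [("Mediterranean", 0.72), ("Low fat", 0.84)]:
    fewer = 1000 * (baseline - risk_after_or(baseline, or_))
    print(f"{diet}: ~{fewer:.0f} fewer deaths per 1,000 over 5 years")
# Mediterranean ~16, low fat ~9 (vs. 17 and nine reported)
```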

The Mediterranean diet was not “convincingly” superior to a low-fat diet for mortality or nonfatal MI, the authors noted.

The absolute effects for the Mediterranean and low-fat diets were more pronounced in adults at high CVD risk. With the Mediterranean diet, there were 36 fewer all-cause deaths and 39 fewer CV deaths per 1,000 over 5 years.

The five other dietary programs generally had “little or no benefit” compared with minimal intervention. The evidence was of low to moderate certainty.

The studies did not provide enough data to gauge the impact of the diets on angina, heart failure, peripheral vascular events, and atrial fibrillation.

The researchers say that strengths of their analysis include a comprehensive review, a thorough literature search, and a rigorous assessment of study bias. In addition, they adhered to recognized GRADE methods for assessing the certainty of estimates.

Limitations of their work include not being able to measure adherence to dietary programs and the possibility that some of the benefits may have been due to other factors, such as drug treatment and support for quitting smoking.

The study had no specific funding. The authors have disclosed no relevant financial relationships.

A version of this article originally appeared on Medscape.com.

New antiobesity drugs will benefit many. Is that bad?

The biased discourse and double standards around antiobesity glucagon-like peptide 1 (GLP-1) receptor agonists continue apace, most recently in The New England Journal of Medicine (NEJM) where some economists opined that their coverage would be disastrous for Medicare.

Among their concerns? The drugs need to be taken long term (just like drugs for any other chronic condition). The new drugs are more expensive than the old drugs (just like new drugs for any other chronic condition). Lots of people will want to take them (just like highly effective drugs for any other chronic condition that has a significant quality-of-life or clinical impact). The U.K. recommended that they be covered only for 2 years (unlike drugs for any other chronic condition). And the Institute for Clinical and Economic Review (ICER) on which they lean heavily decided that $13,618 annually was too expensive for a medication that leads to sustained 15%-20% weight losses and those losses’ consequential benefits.

As a clinician working with patients who sustain those levels of weight loss, I find that conclusion confusing. Whether by way of lifestyle alone, or more often by way of lifestyle efforts plus medication or lifestyle efforts plus surgery, the benefits reported and seen with 15%-20% weight losses are almost uniformly huge. Patients are regularly seen discontinuing or reducing the dosage of multiple medications as a result of improvements to multiple weight-responsive comorbidities, and they also report objective benefits to mood, sleep, mobility, pain, and energy. Losing that much weight changes lives. Not to mention the impact that that degree of loss has on the primary prevention of so many diseases, including plausible reductions in many common cancers – reductions that have been shown to occur after surgery-related weight losses and for which there’s no plausible reason to imagine that they wouldn’t occur with pharmaceutical-related losses.

Are those discussions found in the NEJM op-ed or in the ICER report? Well, yes, sort of. However, in the NEJM op-ed, the word “prevention” isn’t used once, and unlike with oral hypoglycemics or antihypertensives, the authors state that with antiobesity medications, additional research is needed to determine whether medication-induced changes to A1c, blood pressure, and waist circumference would have clinical benefits: “Antiobesity medications have been shown to improve the surrogate end points of weight, glycated hemoglobin levels, systolic blood pressure, and waist circumference. Long-term studies are needed, however, to clarify how medication-induced changes in these surrogate markers translate to health outcomes.”

Primary prevention is mentioned in the ICER review, but in the “limitations” section where the authors explain that they didn’t include it in their modeling: “The long-term benefits of preventing other comorbidities including cancer, chronic kidney disease, osteoarthritis, and sleep apnea were not explicitly modeled in the base case.”

And they pretended that the impact on existing weight-responsive comorbidities mostly didn’t exist, too: “To limit the complexity of the cost-effectiveness model and to prevent double-counting of treatment benefits, we limited the long-term effects of treatments for weight management to cardiovascular risk and delays in the onset and/or diagnosis of diabetes mellitus.”

As far as cardiovascular disease (CVD) benefits go, you might have thought that it would be a slam dunk on that basis alone, at least according to a simple back-of-the-envelope exercise presented at a recent American College of Cardiology conference, which applied the weight changes seen in the semaglutide treatment group of the STEP 1 trial to estimate the population impact on weight and obesity in 30- to 74-year-olds without prior CVD, and estimated 10-year CVD risks using the BMI-based Framingham CVD risk scores. By that accounting, semaglutide treatment in eligible American patients has the potential to prevent over 1.6 million CVD events over 10 years.
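The arithmetic behind a projection like that is simple to sketch, even though the actual Framingham coefficients and population weighting are not reproduced here. Everything in this Python sketch (the risk function, the BMI shift, the cohort) is a placeholder, not the presenters' inputs:

```python
# Sum the drop in predicted 10-year CVD risk across an (illustrative)
# untreated population when each person's BMI shifts by the average
# treatment effect -- the structure of a back-of-the-envelope estimate.
def events_prevented(bmis, risk_fn, bmi_change):
    return sum(risk_fn(b) - risk_fn(b + bmi_change) for b in bmis)

# Placeholder stand-in for the BMI-based Framingham 10-year risk score.
risk = lambda bmi: min(0.5, max(0.0, 0.01 * (bmi - 18)))

cohort_bmis = [28, 32, 36, 40]   # hypothetical eligible adults
print(events_prevented(cohort_bmis, risk, bmi_change=-5.0))  # expected events prevented
```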

Finally, even putting aside ICER’s admittedly and exceedingly narrow base case, what lifestyle-alone studies could ICER possibly be comparing with drug efficacy? And what does “alone” mean? Does “alone” mean with a months- or years-long interprofessional behavioral program? Does “alone” mean by way of diet books? Does “alone” mean by way of simply “moving more and eating less”? I’m not aware of robust studies demonstrating any long-term meaningful, predictable, reproducible, durable weight loss outcomes for any lifestyle-only approach, intensive or otherwise.

It’s difficult for me to imagine a situation in which a drug other than an antiobesity drug would be found to have too many benefits to include in a cost-effectiveness analysis, but where you’d be comfortable running that analysis anyhow, and then come out against recommending the drug and fearmonger about its use.

But then again, systemic weight bias is a hell of a drug.

Dr. Freedhoff is associate professor, department of family medicine, University of Ottawa, and medical director, Bariatric Medical Institute, Ottawa. He disclosed ties with Constant Health and Novo Nordisk, and has shared opinions via Weighty Matters and social media.

A version of this article originally appeared on Medscape.com.

Subclinical CAD by CT predicts MI risk, with or without stenoses

About half of middle-aged adults in the community without cardiovascular (CV) symptoms have coronary atherosclerosis by CT angiography (CTA) that puts them at substantial risk for myocardial infarction (MI), suggests a prospective cohort study.

The 10% of participants who had subclinical disease considered obstructive at CTA showed a ninefold increased risk for MI over several years. Obstructive disease seemed to elevate risk more than subclinical disease that wasn’t obstructive but still considered extensive within the coronary arteries.

The findings, based on a Copenhagen General Population Study cohort, are new for CTA but consistent with research based on coronary artery calcium (CAC) scores and other ways to assess CV risk, say researchers.

Although all participants underwent CTA, such imaging isn’t used in the general population for atherosclerosis screening. But the findings may have implications for “opportunistic screening” for subclinical coronary disease at CTA conducted for other reasons, notes the study’s report, published online in the Annals of Internal Medicine.

“Identification of luminal obstructive or extensive subclinical coronary atherosclerosis” could potentially provide “clinically relevant, incremental risk assessment” in nonischemic patients who undergo cardiac CT or electrocardiogram-gated chest CT before procedures such as arrhythmia ablation or valve repair, it states.

Such patients found with subclinical coronary atherosclerosis might potentially “benefit from referral to intensified cardiovascular primary prevention therapy,” write the authors, led by Andreas Fuchs, MD, PhD, Copenhagen University Hospital-Rigshospitalet.

The group acknowledges the findings may not entirely apply to a non-Danish population.

A screening role for CTA?

Whether CTA has a role to play in adults without symptoms “is a big, open question in the field right now,” Ron Blankstein, MD, who was not associated with the current analysis, told this news organization.

Most population studies of CV risk prediction, such as MESA, have looked at CAC scores, not CTA, and have shown that “the more plaque individuals have, the higher the risk.” The current findings are similar but novel in coming from coronary CTA in a large asymptomatic community population, said Dr. Blankstein, who is director of cardiac CT at Brigham and Women’s Hospital, Boston.

“It’s possible that patients who have obstructive plaque in general tend to have a larger amount of plaque as well,” he said. So, while the study suggests that “the more plaque individuals have, the worse their overall risk,” it also shows that the risk “is enhanced even more if they have obstructive disease.”

The Danish cohort analysis “provides a unique opportunity to study the contemporary natural history of coronary artery disease in the absence of intervention,” notes an accompanying editorial.

For example, both patients and clinicians were blinded to CTA results, and CV preventive therapies weren’t common, observe Michael McDermott, MBChB, and David E. Newby, DM, PhD, of the BHF Centre for Cardiovascular Science, University of Edinburgh.

The analysis suggests that subclinical coronary disease that is obstructive predicts MI risk more strongly than extensive coronary disease, they note, and may be present in two-thirds of MI patients. “This contrasts with symptomatic populations, where nonobstructive disease accounts for most future myocardial infarctions, presumably from plaque rupture.”

It also points to “strong associations between nonobstructive extensive disease and adverse plaque characteristics,” write Dr. McDermott and Dr. Newby. “This underscores the major importance of plaque burden” for the prediction of coronary events.

Graded risk

The analysis included 9,533 persons aged 40 years and older with available CTA assessments and no known ischemic heart disease or symptoms.

Obstructive disease, defined as presence of a luminal stenosis of at least 50%, was seen in 10% and nonobstructive disease in 36% of the total cohort, the report states.

Disease occupying more than one-third of the coronary tree was considered extensive, and disease occupying less than one-third was considered nonextensive; these were present in 10.5% and 35.8% of the cohort, respectively.
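The two axes of the classification (stenosis severity and extent of disease) are independent, which is why the risk figures below appear for each combination. A toy Python sketch of how a participant might be bucketed (the field names and thresholds here restate the definitions above; nothing else is from the paper):

```python
def classify(max_stenosis_pct: float, fraction_of_tree_diseased: float) -> str:
    """Bucket a participant on the study's two axes: >=50% luminal
    stenosis = obstructive; >1/3 of the coronary tree = extensive."""
    obstructive = "obstructive" if max_stenosis_pct >= 50 else "nonobstructive"
    extensive = "extensive" if fraction_of_tree_diseased > 1 / 3 else "nonextensive"
    return f"{obstructive}, {extensive}"

print(classify(60, 0.2))  # "obstructive, nonextensive"
print(classify(30, 0.4))  # "nonobstructive, extensive"
```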

There were 71 MIs and 193 deaths over a median of 3.5 years. The adjusted relative risk for MI, compared with those without coronary atherosclerosis, was:

  • 7.65 (95% confidence interval, 3.53-16.57) overall in patients with extensive disease.
  • 8.28 (95% CI, 3.75-18.32) in those with obstructive but nonextensive disease.
  • 9.19 (95% CI, 4.49-18.82) overall in those with obstructive disease.
  • 12.48 (95% CI, 5.50-28.12) in those with both obstructive and extensive disease.

The adjusted RR for the composite of death or MI was also elevated in persons with extensive disease:

  • 2.70 (95% CI, 1.72-4.25) in those with extensive but nonobstructive disease.
  • 3.15 (95% CI, 2.05-4.83) in those with extensive and obstructive disease.

“It’s one thing to show that the more plaque, the higher the risk,” Dr. Blankstein said. But “does the information ultimately lead to better outcomes? Do patients have fewer MIs or fewer deaths?” Several ongoing randomized trials are exploring these questions.

They include DANE-HEART (Computed Tomography Coronary Angiography for Primary Prevention), projected to enroll about 6,000 participants from the Copenhagen General Population Study cohort who have at least one CV risk factor, and SCOT-HEART 2 (second Computed Tomography Coronary Angiography for the Prevention of Myocardial Infarction), enrolling a similar cohort in Scotland.

The study was supported by grants from AP Møller og Hustru Chastine Mc-Kinney Møllers Fond, the Research Council of Rigshospitalet, and Danish Heart Foundation. Dr. Fuchs reports no relevant financial relationships. Disclosures for the other authors can be found here. Dr. Blankstein recently disclosed serving as a consultant to Amgen, Caristo Diagnostics, Novartis, and Silence Therapeutics. Disclosures for Dr. McDermott and Dr. Newby, who are SCOT-HEART 2 investigators, can be found here.

A version of this article originally appeared on Medscape.com.


‘Excess’ deaths surging, but why?


This transcript has been edited for clarity.

“Excess deaths.” You’ve heard the phrase countless times by now. It is one of the myriad previously esoteric epidemiology terms that the pandemic brought squarely into the zeitgeist.

As a sort of standard candle of the performance of a state or a region or a country in terms of health care, it has a lot of utility – if for nothing more than Monday-morning quarterbacking. But this week, I want to dig in on the concept a bit because, according to a new study, the excess death gap between the United States and Western Europe has never been higher.

What do we mean when we say “excess mortality?” The central connotation of the idea is that there are simply some deaths that should not have occurred. You might imagine that the best way to figure this out is for some group of intelligent people to review each death and decide, somehow, whether it was expected or not. But aside from being impractical, this would end up being somewhat subjective. That older person who died from pneumonia – was that an expected death? Could it have been avoided?

Rather, the calculation of excess mortality relies on large numbers and statistical inference to compare an expected number of deaths with those that are observed.

The difference is excess mortality, even if you can never be sure whether any particular death was expected or not.

As always, however, the devil is in the details. What data do you use to define the expected number of deaths?

There are options here. Probably the most straightforward analysis uses past data from the country of interest. You look at annual deaths over some historical period and compare those numbers with the rates today. Two issues need to be accounted for: population growth – a larger population will have more deaths, so you need to scale the historical rates to the current population size – and demographic shifts – an older or more male population will have more deaths, so you need to adjust for that as well.

But provided you take care of those factors, you can estimate fairly well how many deaths you can expect to see in any given period of time.
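
As a minimal sketch of that bookkeeping, with made-up age-specific rates and population counts purely for illustration: apply reference-period death rates to the current population structure, sum to get expected deaths, and subtract from observed deaths. Swapping in another country's age-specific rates turns the same arithmetic into the cross-country comparison used below.

```python
# All numbers are invented for illustration; none are real vital statistics.
reference_rates = {"0-14": 0.0003, "15-64": 0.003, "65+": 0.040}   # deaths/person/year
current_population = {"0-14": 61e6, "15-64": 214e6, "65+": 56e6}   # persons

expected = sum(reference_rates[age] * current_population[age]
               for age in reference_rates)
observed = 3.2e6  # hypothetical observed deaths in the same year

excess = observed - expected
print(f"expected {expected:,.0f}, excess {excess:,.0f} "
      f"({excess / observed:.0%} of all deaths)")
```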

Still, you should see right away that excess mortality is a relative concept. If you think that, just perhaps, the United States has some systematic failure to deliver care that has been stable and persistent over time, you wouldn’t capture that failing in an excess mortality calculation that uses U.S. historical data as the baseline.

The best way to get around that is to use data from other countries, and that’s just what this article – a rare single-author piece by Patrick Heuveline – does, calculating excess deaths in the United States by standardizing our mortality rates to the five largest Western European countries: the United Kingdom, France, Germany, Italy, and Spain.

Controlling for the differences in the demographics of that European population, here is the expected number of deaths in the United States over the past 5 years.



Note that there is a small uptick in expected deaths in 2020, reflecting the pandemic, which returns to baseline levels by 2021. This is because that’s what happened in Europe; by 2021, the excess mortality due to COVID-19 was quite low.

Here are the actual deaths in the US during that time.

US observed mortality and US expected mortality (2017-2021)


Highlighted here in green, then, is the excess mortality over time in the United States.



There are some fascinating and concerning findings here.

First of all, you can see that even before the pandemic, the United States had an excess mortality problem. This is not entirely a surprise; we’ve known that so-called “deaths of despair,” those due to alcohol abuse, drug overdoses, and suicide, are at an all-time high and tend to affect a “prime of life” population that would not otherwise be expected to die. In fact, fully 50% of the excess deaths in the United States occur in those between ages 15 and 64.

Excess deaths are also a concerning percentage of total deaths. In 2017, 17% of total deaths in the United States could be considered “excess.” In 2021, that number had more than doubled to 35%. Nearly 900,000 individuals in the United States died in 2021 who perhaps didn’t need to.

The obvious culprit to blame here is COVID, but COVID-associated excess deaths only explain about 50% of the excess we see in 2021. The rest reflect something even more concerning: a worsening of the failures of the past, perhaps exacerbated by the pandemic but not due to the virus itself.

Of course, we started this discussion acknowledging that the calculation of excess mortality is exquisitely dependent on how you model the expected number of deaths, and I’m sure some will take issue with the use of European numbers when applied to Americans. After all, Europe has, by and large, a robust public health service, socialized medicine, and healthcare that does not run the risk of bankrupting its citizens. How can we compare our outcomes to a place like that?

How indeed.
 

F. Perry Wilson, MD, MSCE, is an associate professor of medicine and director of Yale University’s Clinical and Translational Research Accelerator in New Haven, Conn. He reported no relevant conflicts of interest.
 

A version of this article originally appeared on Medscape.com.


Could a baby’s gut health be an early predictor of future type 1 diabetes?


Microbial biomarkers for type 1 diabetes may be present in infants as young as 12 months old, suggesting the potential to mitigate disease onset by nurturing a healthy gut microbiome early, show data from the Swedish general population.

“Our findings indicate that the gut of infants who go on to develop type 1 diabetes is notably different from healthy babies,” said Malin Bélteky, MD, from the Crown Princess Victoria’s Children’s Hospital, Linköping, Sweden, who jointly led the work, which was recently published in Diabetologia, alongside Patricia L. Milletich, PhD candidate, from the University of Florida, Gainesville.

“This discovery could be used to help identify infants at [the] highest risk of developing type 1 diabetes before or during the first stage of disease and could offer the opportunity to bolster a healthy gut microbiome to prevent the disease from becoming established,” added Dr. Bélteky.

Currently, beta-cell autoantibodies, which usually only become identifiable between 9 and 36 months of age, are used to predict disease.

Marian Rewers, MD, PhD, professor of pediatrics & medicine, University of Colorado, Denver, and principal investigator of The Environmental Determinants of Diabetes in the Young (TEDDY) study, welcomed the findings, saying it is a well-designed study from a strong group of investigators.

“While the effective number of cases was very small [n = 16], the results were apparently adjusted for multiple comparisons, and significant differences were noted in the microbiome of cases versus controls at 1 year of age. This was 12 years prior to the average age of type 1 diabetes diagnosis in the cases,” he said.

“The differences in diversity and abundances of specific bacteria need to be interpreted with caution; however, the study results are consistent with several previous reports,” he noted.
 

Differences in microbial diversity and function

Data were drawn from children participating in the longitudinal, general population All Babies In Southeast Sweden (ABIS) study. Microbiota from stool samples, taken at age 1 year, were sequenced and analyzed to establish diversity, abundance, and functional status of the component bacteria. Questionnaires were completed at birth and at 1 year of age, allowing for the study of environmental factors that might influence the microbiota or type 1 diabetes risk independently. Parent diaries provided information on pregnancy, nutrition, and lifestyle factors.

Of the cohort of 167 children who developed type 1 diabetes by 2020, stool samples were available for 16; these were compared with samples from 268 healthy controls. The microbiomes of the 16 infants who later developed type 1 diabetes were compared, over 100 matching iterations, with sets of 32 matched control infants (matched by geographical region, siblings at birth, residence type, duration of breastfeeding, and month of stool collection) who didn’t develop type 1 diabetes by the age of 20.
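
To make that design concrete, here is a minimal Python sketch of the repeated matched-control comparison. The sample IDs and the single-taxon abundance table are invented, and where the study matched controls on the covariates listed above, this toy version simply samples them at random.

```python
import random

random.seed(0)
cases = [f"case_{i}" for i in range(16)]          # infants who later developed T1D
control_pool = [f"ctrl_{i}" for i in range(268)]  # infants who did not
# Fake relative abundance of one taxon per sample, for illustration only
abundance = {s: random.random() for s in cases + control_pool}

case_mean = sum(abundance[s] for s in cases) / len(cases)
gaps = []
for _ in range(100):                              # 100 matching iterations
    controls = random.sample(control_pool, 32)    # 32 controls per iteration
    ctrl_mean = sum(abundance[s] for s in controls) / len(controls)
    gaps.append(case_mean - ctrl_mean)

print(f"mean case-minus-control abundance gap: {sum(gaps) / len(gaps):+.3f}")
```

Averaging over many control draws, rather than relying on one matched set, keeps a 16-case analysis from hinging on any single lucky or unlucky match.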

Specific bacteria found in greater abundance in children who later developed type 1 diabetes, compared with those who didn’t, included Firmicutes (Enterococcus, Gemella, and Hungatella), as well as Bacteroides (Bacteroides and Porphyromonas), known to promote inflammation and be involved in the immune response.

Bacteria with greater abundance in children who didn’t develop type 1 diabetes, compared with those who did, were Firmicutes (Anaerostipes, Flavonifractor, and Ruminococcaceae UBA1819, and Eubacterium). These species help maintain metabolic and immune health and produce butyrate, an important short-chain fatty acid that helps prevent inflammation and fuels the cells of the gut lining.

Alistipes were more abundant in infants who didn’t develop type 1 diabetes, and various abundances of Fusicatenibacter were the strongest factors for differentiating future type 1 diabetes, reported the researchers.

“Gut microbial biomarkers at 12 months would benefit the prediction opportunity well before the onset of multiple autoantibodies,” write the authors.

The youngest age at type 1 diabetes diagnosis was 1 year, 4 months, and the oldest was 21 years, 4 months. The mean age at diagnosis was 13.3 years.

The microbial differences found between infants who go on to develop type 1 diabetes and those who don’t also shed light on interactions between the developing immune system and short-chain fatty acid production and metabolism in childhood autoimmunity, write the authors.

Prior studies have found fewer short-chain fatty acid–producing microbiota in the gut of children with early-onset autoantibody development. This study confirmed these data, finding a decrease in butyrate-producing bacteria (Anaerostipes, Flavonifractor, Ruminococcaceae UBA1819, and Eubacterium) in infants who went on to develop type 1 diabetes. Likewise, a reduction in pyruvate fermentation was found in those infants with future disease.

According to coauthor Eric Triplett, PhD, from the University of Florida, Gainesville: “The autoimmune processes usually begin long before any clinical signs of disease appear, highlighting how differences in the makeup of the infant gut microbiome could shed important light on the complex interaction between the developing immune system, environmental exposures in childhood, and autoimmunity. Studies with much larger cohorts of prospectively traced individuals will be required to establish which are the strongest biomarkers and how effectively they can predict disease.”

The authors and Dr. Rewers have reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Sweaty treatment for social anxiety could pass the sniff test


 

Getting sweet on sweat

Are you the sort of person who struggles in social situations? Have the past 3 years been a secret respite from the terror and exhaustion of meeting new people? We understand your plight. People kind of suck. And you don’t have to look far to be reminded of it.

Unfortunately, on occasion we all have to interact with other human beings. If you suffer from social anxiety, this is not a fun thing to do. But new research indicates that there may be a way to alleviate the stress for those with social anxiety: armpits.


Specifically, sweat from the armpits of other people. Yes, this means a group of scientists gathered up some volunteers and collected their armpit sweat while the volunteers watched a variety of movies (horror, comedy, romance, etc.). Our condolences to the poor unpaid interns tasked with gathering the sweat.

Once they had their precious new medicine, the researchers took a group of women and administered a round of mindfulness therapy. Some of the participants then received the various sweats, while the rest were forced to smell only clean air. (The horror!) Lo and behold, the sweat groups had their anxiety scores reduced by about 40% after their therapy, compared with just 17% in the control group.

The researchers also found that the source of the sweat didn’t matter. Their study subjects responded the same to sweat excreted during a scary movie as they did to sweat from a comedy, a result that surprised the researchers. They suggested chemosignals in the sweat may affect the treatment response and advised further research. Which means more sweat collection! They plan on testing emotionally neutral movies next time, and if we can make a humble suggestion, they also should try the sweatiest movies.

Before the Food and Drug Administration can approve armpit sweat as a treatment for social anxiety, we have some advice for those shut-in introverts out there. Next time you have to interact with rabid extroverts, instead of shaking their hands, walk up to them and take a deep whiff of their armpits. Establish dominance. Someone will feel awkward, and science has proved it won’t be you.
 

The puff that vaccinates

Ever been shot with a Nerf gun or hit with a foam pool tube? More annoying than painful, right? If we asked if you’d rather get pelted with one of those than receive a traditional vaccine injection, you would choose the former. Maybe someday you actually will.


During the boredom of the early pandemic lockdown, Jeremiah Gassensmith, PhD, of the department of chemistry and biochemistry at the University of Texas, Dallas, ordered a compressed gas–powered jet injection system to fool around with at home. Hey, who didn’t? Anyway, when it was time to go back to the lab he handed it over to one of his grad students, Yalini Wijesundara, and asked her to see what could be done with it.

In her tinkering she found that the jet injector could deliver, through the skin, metal-organic frameworks (MOFs) that can hold a bunch of different materials, like proteins and nucleic acids.

Thus the “MOF-Jet” was born!

Jet injectors are nothing new, but they hurt. The MOF-Jet, however, is practically painless and cheaper than the gene guns that veterinarians use to inject biological cargo attached to the surface of a metal microparticle.

Changing the carrier gas also changes the time needed to break down the MOF and thus alters delivery of the drug inside. “If you shoot it with carbon dioxide, it will release its cargo faster within cells; if you use regular air, it will take 4 or 5 days,” Ms. Wijesundara explained in a written statement. That means the same drug could be released over different timescales without changing its formulation.

While testing on onion cells and mice, Ms. Wijesundara noted that it was as easy as “pointing and shooting” to distribute the puff of gas into the cells. A saving grace to those with needle anxiety. Not that we would know anything about needle anxiety.

More testing needs to be done before bringing this technology to human use, obviously, but we’re looking forward to saying goodbye to that dreaded prick and hello to a puff.
 

 

 

Your hippocampus is showing

Brain anatomy is one of the many, many things that’s not really our thing, but we do know a cool picture when we see one. Case in point: The image just below, which happens to be a full-scale, single-cell resolution model of the CA1 region of the hippocampus that “replicates the structure and architecture of the area, along with the position and relative connectivity of the neurons,” according to a statement from the Human Brain Project.


“We have performed a data mining operation on high resolution images of the human hippocampus, obtained from the BigBrain database. The position of individual neurons has been derived from a detailed analysis of these images,” said senior author Michele Migliore, PhD, of the Italian National Research Council’s Institute of Biophysics in Palermo.

Yes, he did say BigBrain database. BigBrain is – we checked, and it’s definitely not this – a 3D model of a brain that was sectioned into 7,404 slices just 20 micrometers thick and then scanned by MRI. Digital reconstruction of those slices was done by supercomputer and the results are now available for analysis.

Dr. Migliore and his associates developed an image-processing algorithm to obtain neuronal positioning distribution and an algorithm to generate neuronal connectivity by approximating the shapes of dendrites and axons. (Our brains are starting to hurt just trying to write this.) “Some fit into narrow cones, others have a broad complex extension that can be approximated by dedicated geometrical volumes, and the connectivity to nearby neurons changes accordingly,” explained lead author Daniela Gandolfi of the University of Modena (Italy) and Reggio Emilia.
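
The authors' pipeline is considerably more sophisticated, but the first step, turning a stained image into a list of cell positions, can be sketched with ordinary threshold-plus-connected-components machinery. Everything below (the synthetic image, the threshold, the two planted "cells") is invented for illustration and is not the paper's algorithm.

```python
import numpy as np
from scipy import ndimage

# Synthetic "histology" image: bright background with two dark planted blobs
# standing in for stained cell bodies.
rng = np.random.default_rng(0)
image = 0.5 + 0.5 * rng.random((200, 200))  # background values in [0.5, 1.0)
image[50:54, 80:84] = 0.0                   # fake cell 1
image[120:124, 30:34] = 0.0                 # fake cell 2

mask = image < 0.05                          # "stained" pixels
labels, n_cells = ndimage.label(mask)        # group pixels into blobs
centers = ndimage.center_of_mass(mask, labels, range(1, n_cells + 1))
print(n_cells, centers)                      # -> 2 and their (row, col) centroids
```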

The investigators have made their dataset and the extraction methodology available on the EBRAINS platform and through the Human Brain Project and are moving on to other brain regions. And then, once everyone can find their way in and around the old gray matter, it should bring an end to conversations like this, which no doubt occur between male and female neuroscientists every day:

“Arnold, I think we’re lost.”

“Don’t worry, Bev, I know where I’m going.”

“Stop and ask this lady for directions.”

“I said I can find it.”

“Just ask her.”

“Fine. Excuse me, ma’am, can you tell us how to get to the corpora quadrigemina from here?”

Publications
Topics
Sections

 

Getting sweet on sweat

Are you the sort of person who struggles in social situations? Have the past 3 years been a secret respite from the terror and exhaustion of meeting new people? We understand your plight. People kind of suck. And you don’t have to look far to be reminded of it.

Unfortunately, on occasion we all have to interact with other human beings. If you suffer from social anxiety, this is not a fun thing to do. But new research indicates that there may be a way to alleviate the stress for those with social anxiety: armpits.

alex bracken/Unsplash

Specifically, sweat from the armpits of other people. Yes, this means a group of scientists gathered up some volunteers and collected their armpit sweat while the volunteers watched a variety of movies (horror, comedy, romance, etc.). Our condolences to the poor unpaid interns tasked with gathering the sweat.

Once they had their precious new medicine, the researchers took a group of women and administered a round of mindfulness therapy. Some of the participants then received the various sweats, while the rest were forced to smell only clean air. (The horror!) Lo and behold, the sweat groups had their anxiety scores reduced by about 40% after their therapy, compared with just 17% in the control group.

The researchers also found that the source of the sweat didn’t matter. Their study subjects responded the same to sweat excreted during a scary movie as they did to sweat from a comedy, a result that surprised the researchers. They suggested chemosignals in the sweat may affect the treatment response and advised further research. Which means more sweat collection! They plan on testing emotionally neutral movies next time, and if we can make a humble suggestion, they also should try the sweatiest movies.

Before the Food and Drug Administration can approve armpit sweat as a treatment for social anxiety, we have some advice for those shut-in introverts out there. Next time you have to interact with rabid extroverts, instead of shaking their hands, walk up to them and take a deep whiff of their armpits. Establish dominance. Someone will feel awkward, and science has proved it won’t be you.
 

The puff that vaccinates

Ever been shot with a Nerf gun or hit with a foam pool tube? More annoying than painful, right? If we asked if you’d rather get pelted with one of those than receive a traditional vaccine injection, you would choose the former. Maybe someday you actually will.

Dr. Jeremiah Gassensmith

During the boredom of the early pandemic lockdown, Jeremiah Gassensmith, PhD, of the department of chemistry and biochemistry at the University of Texas, Dallas, ordered a compressed gas–powered jet injection system to fool around with at home. Hey, who didn’t? Anyway, when it was time to go back to the lab he handed it over to one of his grad students, Yalini Wijesundara, and asked her to see what could be done with it.

In her tinkering she found that the jet injector could deliver metal-organic frameworks (MOFs) that can hold a bunch of different materials, like proteins and nucleic acids, through the skin.

Thus the “MOF-Jet” was born!

Jet injectors are nothing new, but they hurt. The MOF-Jet, however, is practically painless and cheaper than the gene guns that veterinarians use to inject biological cargo attached to the surface of a metal microparticle.

Changing the carrier gas also changes the time needed to break down the MOF and thus alters delivery of the drug inside. “If you shoot it with carbon dioxide, it will release its cargo faster within cells; if you use regular air, it will take 4 or 5 days,” Ms. Wijesundara explained in a written statement. That means the same drug could be released over different timescales without changing its formulation.

While testing on onion cells and mice, Ms. Wijesundara noted that it was as easy as “pointing and shooting” to distribute the puff of gas into the cells. A saving grace to those with needle anxiety. Not that we would know anything about needle anxiety.

More testing needs to be done before bringing this technology to human use, obviously, but we’re looking forward to saying goodbye to that dreaded prick and hello to a puff.
 

 

 

Your hippocampus is showing

Brain anatomy is one of the many, many things that’s not really our thing, but we do know a cool picture when we see one. Case in point: The image just below, which happens to be a full-scale, single-cell resolution model of the CA1 region of the hippocampus that “replicates the structure and architecture of the area, along with the position and relative connectivity of the neurons,” according to a statement from the Human Brain Project.

Dr. Michele Migliore

“We have performed a data mining operation on high resolution images of the human hippocampus, obtained from the BigBrain database. The position of individual neurons has been derived from a detailed analysis of these images,” said senior author Michele Migliore, PhD, of the Italian National Research Council’s Institute of Biophysics in Palermo.

Yes, he did say BigBrain database. BigBrain iswe checked and it’s definitely not this – a 3D model of a brain that was sectioned into 7,404 slices just 20 micrometers thick and then scanned by MRI. Digital reconstruction of those slices was done by supercomputer and the results are now available for analysis.

Dr. Migliore and his associates developed an image-processing algorithm to obtain neuronal positioning distribution and an algorithm to generate neuronal connectivity by approximating the shapes of dendrites and axons. (Our brains are starting to hurt just trying to write this.) “Some fit into narrow cones, others have a broad complex extension that can be approximated by dedicated geometrical volumes, and the connectivity to nearby neurons changes accordingly,” explained lead author Daniela Gandolfi of the University of Modena and Reggio Emilia (Italy).
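As a rough sketch of the connectivity step – our own simplification, not the authors’ algorithm – the idea is to approximate each neuron’s axonal field as a geometric volume and connect pairs whose somata fall within it. The toy version below uses spheres with a random “reach” where the real pipeline fits cones and more elaborate volumes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy version of geometry-based connectivity: place neurons in a
# 1-mm cube, give each an assumed axonal "reach," and connect neuron i to
# neuron j when j's soma lies within i's reach.
n = 500
positions = rng.uniform(0, 1000, size=(n, 3))  # soma positions, micrometers
reach = rng.uniform(50, 200, size=n)           # assumed per-neuron axonal reach

# Pairwise soma-to-soma distances, then a directed adjacency matrix
# (self-connections excluded).
dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
connected = (dist < reach[:, None]) & ~np.eye(n, dtype=bool)

print(f"mean out-degree: {connected.sum(axis=1).mean():.1f}")
```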

The investigators have made their dataset and the extraction methodology available on the EBRAINS platform and through the Human Brain Project and are moving on to other brain regions. And then, once everyone can find their way in and around the old gray matter, it should bring an end to conversations like this, which no doubt occur between male and female neuroscientists every day:

“Arnold, I think we’re lost.”

“Don’t worry, Bev, I know where I’m going.”

“Stop and ask this lady for directions.”

“I said I can find it.”

“Just ask her.”

“Fine. Excuse me, ma’am, can you tell us how to get to the corpora quadrigemina from here?”

 


FDA approves OTC naloxone, but will cost be a barrier?


The Food and Drug Administration has approved over-the-counter sales of the overdose reversal agent Narcan (naloxone, Emergent BioSolutions). Greater access to the drug should mean more lives saved. However, it’s unclear how much the nasal spray will cost and whether pharmacies will stock the product openly on shelves. 

Currently, major pharmacy chains such as CVS and Walgreens make naloxone available without prescription, but consumers have to ask a pharmacist to dispense the drug.

“The major question is what is it going to cost,” Brian Hurley, MD, MBA, president-elect of the American Society of Addiction Medicine, said in an interview. “In order for people to access it they have to be able to afford it.”

“We won’t accomplish much if people can’t afford to buy Narcan,” said Chuck Ingoglia, president and CEO of the National Council for Mental Wellbeing, in a statement. Still, he applauded the FDA.

“No single approach will end overdose deaths but making Narcan easy to obtain and widely available likely will save countless lives annually,” he said.

“The timeline for availability and price of this OTC product is determined by the manufacturer,” the FDA said in a statement.

Commissioner Robert M. Califf, MD, called for the drug’s manufacturer to “make accessibility to the product a priority by making it available as soon as possible and at an affordable price.”

Emergent BioSolutions did not comment on cost. It said in a statement that the spray “will be available on U.S. shelves and at online retailers by the late summer,” after it has adapted Narcan for direct-to-consumer use, including more consumer-oriented packaging.

Naloxone’s cost varies, depending on geographic location and whether it is generic. According to GoodRx, a box containing two doses of generic naloxone costs $31-$100, depending on location and coupon availability.

A two-dose box of Narcan costs $135-$140. Emergent reported a 14% decline in naloxone sales in 2022 – to $373.7 million – blaming it in part on the introduction of generic formulations.

Dr. Hurley said he expects that those who purchase Narcan at a drugstore will primarily be people who already shop there, who may or may not be those who most often experience overdose, such as people leaving incarceration or experiencing homelessness.

Having Narcan available over the counter “is an important supplement but it doesn’t replace the existing array of naloxone distribution programs,” Dr. Hurley said.

The FDA has encouraged naloxone manufacturers to seek OTC approval for the medication since at least 2019, when it designed a model label for a theoretical OTC product.

In November, the agency said it had determined that some naloxone products had the potential to be safe and effective for OTC use and again urged drugmakers to seek such an approval.

Emergent BioSolutions was the first to pursue OTC approval, but another manufacturer – the nonprofit Harm Reduction Therapeutics – is awaiting approval of its application to sell its spray directly to consumers.

Scott Gottlieb, MD, who was the FDA commissioner from 2017 to 2019, said in a tweet that more work needed to be done.

“This regulatory move should be followed by a strong push by elected officials to support wider deployment of Narcan, getting more doses into the hands of at risk households and frontline workers,” he tweeted.

Mr. Ingoglia said that “Narcan represents a second chance. By giving people a second chance, we also give them an opportunity to enter treatment if they so choose. You can’t recover if you’re dead, and we shouldn’t turn our backs on those who may choose a pathway to recovery that includes treatment.”
 

A version of this article first appeared on Medscape.com.


Plant-based diets not always healthy; quality is key


Diets consisting of high-quality – but not low-quality – plant-based foods and lower intakes of animal products may lower the risks for cancer, heart disease, and early death, new research suggests.

The prospective cohort study used data from more than 120,000 middle-aged adults followed for over 10 years in the UK Biobank. Those who consumed a healthful plant-based diet – with higher amounts of foods such as fruits, vegetables, legumes, whole grains, and nuts – and lower intakes of animal products, sugary drinks, and refined grains had a 16% lower risk of dying during follow-up, compared with those with the lowest intakes of the healthful plant-based foods.

By contrast, an unhealthy plant-based diet was associated with a 23% higher total mortality risk.

“Not all plant-based diets are created equally. Our data provide evidence to support the notion that for health benefits the plant-based sources need to be whole grains, fruits and vegetables, legumes, nuts, etc., rather than processed plant-based foods,” study coauthor Aedín Cassidy, PhD, of Queen’s University, Belfast, Northern Ireland, said in an interview.

She added: “We do not necessarily need to radically shift diets to vegan or vegetarian regimens, but rather to switch proportions on the plate to incorporate more healthful plant-based foods, fish, and leaner cuts of meat into our habitual diet. This would have benefits for both individual health and planetary health.”

The findings were published online in JAMA Network Open by Alysha S. Thompson, MSc, also at Queen’s University, and colleagues.
 

High- vs. low-quality plant-based diets linked to better outcomes

The UK Biobank is a population-based, prospective study that included more than 500,000 participants aged 40-69 years at the time of recruitment between 2006 and 2010 at 22 centers in England, Scotland, and Wales. The current study included 126,395 individuals; slightly over half (55.9%) were women.

Food intake data were collected for at least two 24-hour periods to create both “healthful” and “unhealthful” plant-based diet indexes (PDIs). These included 17 food groups: whole grains, fruits, vegetables, nuts, legumes and vegetarian protein alternatives, tea and coffee, fruit juices, refined grains, potatoes, sugar-sweetened beverages, sweets and desserts, animal fat, dairy, eggs, fish or seafood, meat, and miscellaneous animal-derived foods. Data on oils weren’t available.

For the healthful PDI, higher intakes of healthy plant foods were scored positively and other food groups negatively; the unhealthful PDI reversed this weighting.

Participants were then ranked in quartiles for portions of each food group and assigned scores between 2 (lowest-intake category) and 5 (highest).
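To make the scoring concrete, here is a minimal sketch of a healthful PDI under the scheme just described. The food groups shown are an illustrative subset of the 17, and the column names and helper are our own, not the study’s code.

```python
import pandas as pd

# Minimal sketch of a healthful plant-based diet index (hPDI): intake of each
# food group is ranked into quartiles scored 2 (lowest) to 5 (highest);
# healthy plant groups count positively, all other groups are reverse-scored.
HEALTHY_PLANT = ["whole_grains", "fruits", "vegetables", "nuts", "legumes"]

def hpdi(intakes: pd.DataFrame) -> pd.Series:
    """intakes: rows = participants, columns = food-group portions."""
    total = pd.Series(0, index=intakes.index)
    for col in intakes.columns:
        # rank(method="first") breaks ties so qcut can form 4 equal bins
        q = pd.qcut(intakes[col].rank(method="first"), 4,
                    labels=[2, 3, 4, 5]).astype(int)
        total += q if col in HEALTHY_PLANT else (7 - q)  # reverse: 5 -> 2, ..., 2 -> 5
    return total
```

An unhealthful PDI would simply flip which groups are scored positively.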

During a follow-up of 10.6-12.2 years, there were 698 deaths attributed to cardiovascular disease, 3,275 deaths caused by cancer, 6,890 individuals who experienced a cardiovascular event, and 8,939 with incident cancer.

Another 4,751 experienced an incident fracture, which was evaluated because of the concern that diets low in animal protein might lead to insufficient vitamin B and calcium intake.

After adjustment for confounding factors, the hazard ratio for all-cause mortality in individuals in the highest healthful PDI score quartile, compared with the lowest quartile, was 0.84 – the 16% lower risk noted above.

At the same time, the HR for all-cause mortality for those with the highest versus lowest unhealthful PDI scores was 1.23, and for cancer-related mortality was 1.19. All were statistically significant (P = .004).

Similarly, greater healthy plant-based diet adherence was associated with a significantly lower risk of being diagnosed with any cancer (HR, 0.93; P = .03), while higher unhealthful PDI scores yielded a higher risk (HR, 1.10; P = .004).

Moreover, higher healthy PDI scores were associated with a lower total cardiovascular event risk (HR, 0.92; P = .007), as well as with lower risks of ischemic stroke (HR, 0.84; P = .08) and MI (HR, 0.86; P = .004). Higher unhealthy PDI scores were similarly associated with greater risks for those outcomes, with an overall HR of 1.21 (P = .004).

No associations were found between either healthful PDI or unhealthful PDI and total or site-specific fracture risk.

And because 91.3% of the UK Biobank study population was White, “future studies among more racially, ethnically, and culturally diverse populations are needed to assess the risk of major chronic disease in relation to [plant-based diets],” the authors wrote.

Dr. Cassidy and Ms. Thompson reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.


One or two high-step days may reduce mortality risks


Taking 8,000 steps or more for just 1 or 2 days a week was linked to a significant reduction in all-cause and cardiovascular mortality, according to a study of about 3,000 adults.

Previous research has shown lower mortality rates among individuals who walk consistently, especially those who log at least 8,000 steps daily, but the benefit of intense walking just once or twice a week on long-term health outcomes has not been examined, wrote Kosuke Inoue, MD, of Kyoto University, Japan, and colleagues.

In a study published in JAMA Network Open, the researchers reviewed 10-year follow-up data for 3,101 adults aged 20 years and older who were part of the 2005 and 2006 National Health and Nutrition Examination Survey (NHANES).

The participants were asked to wear accelerometers to track their steps for 7 consecutive days. The researchers assessed the dose-response relationship between days of taking 8,000 steps or more (about 4 miles) during 1 week, and the primary outcome of all-cause mortality risk after 10 years. Cardiovascular mortality risk after 10 years was a secondary outcome.

The mean age of the participants was 50.5 years and 51% were women. The breakdown by ethnicity was 51% White, 21% Black, 24% Hispanic, and 4% other races/ethnicities. A total of 632 individuals took 8,000 steps or more 0 days a week, 532 took at least 8,000 steps 1-2 days per week, and 1,937 took at least 8,000 steps 3-7 days a week.

During the 10-year follow-up period, overall all-cause mortality was 14.2% and cardiovascular mortality was 5.3% across all step groups.

In an adjusted analysis, individuals who took at least 8,000 steps 1-2 days a week had a 14.9% lower all-cause mortality risk compared with those who never reached 8,000 daily steps. This difference was similar to the 16.5% reduced mortality risk for those who took at least 8,000 steps 3-7 days a week.

Similarly, compared with the group with no days of at least 8,000 steps, cardiovascular mortality risk was 8.1% lower for those who took 8,000 steps 1-2 days per week and 8.4% lower for those who took at least 8,000 steps 3-7 days per week. The decreased mortality risk plateaued at 3-4 days.

These patterns in reduced all-cause mortality risk persisted in a stratified analysis by age (younger than 65 years and 65 years and older) and sex. Similar patterns in reduced mortality also emerged when the researchers used different thresholds of daily steps, such as a minimum of 10,000 steps instead of 8,000. The adjusted all-cause mortality risks for the groups who took at least 10,000 steps 1-2 days a week, 3-7 days a week, and no days a week were 8.1%, 7.3%, and 16.7%, respectively, with corresponding cardiovascular mortality risks of 2.4%, 2.3%, and 7.0%.
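As an illustration of how such exposure groups can be built from a week of wear data, here is a short sketch; the column layout and function name are assumptions, not NHANES variables or the study’s code.

```python
import pandas as pd

# Sketch of the exposure definition: count days meeting the step threshold
# across 7 days of accelerometer data, then bin participants into the
# study's 0-day, 1-2 day, and 3-7 day groups.
def step_groups(daily_steps: pd.DataFrame, threshold: int = 8000) -> pd.Series:
    """daily_steps: rows = participants, columns = 7 daily step counts."""
    days_over = (daily_steps >= threshold).sum(axis=1)
    # bins (-1,0], (0,2], (2,7] reproduce the three study categories
    return pd.cut(days_over, bins=[-1, 0, 2, 7],
                  labels=["0 days", "1-2 days", "3-7 days"])
```

Passing threshold=10000 would reproduce the alternative-threshold sensitivity analysis described above.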

“Given the simplicity and ease of counting daily steps, our findings indicate that the recommended number of steps taken on as few as 1 to 2 days per week may be a feasible option for individuals who are striving to achieve some health benefits through adhering to a recommended daily step count but are unable to accomplish this on a daily basis,” the researchers wrote in their discussion.

The findings were limited by several factors, including the use of daily step measures for 1 week only at baseline, with no data on how changes in physical activity might impact mortality risk, the researchers noted. Other limitations included possible accelerometer error and misclassification of activity, possible selection bias, and lack of data on cause-specific mortality outside of cardiovascular death, they said.

However, the results were strengthened by the use of accelerometers as objective measures of activity and by the availability of 10-year follow-up data for nearly 100% of the participants, they said.

“Although our findings might suffer from residual confounding that should be addressed in future research, they suggest that people may receive substantial health benefits even if a sufficient number of steps are taken on only a couple days of the week,” they concluded.

Proceed with caution

The current study findings should be interpreted cautiously in light of the potential unmeasured confounding factors and selection bias that often occur in studies of physical activity, James Sawalla Guseh, MD, of Massachusetts General Hospital, and Jose F. Figueroa, MD, of Harvard T.H. Chan School of Public Health, Boston, wrote in an accompanying editorial.

The results support previous studies showing some longevity benefits with “weekend warrior” patterns of intense physical activity for only a couple of days; however, “the body of evidence for sporadic activity is not as robust as the evidence for sustained and regular aerobic activity,” the authors emphasized.

The editorial authors also highlighted the limitations of the current study, including the observational design and significant differences in demographics and comorbidities between the group with 1-2 days of at least 8,000 steps and the 0-day group, as well as the reliance on only a week’s worth of data to infer mortality over 10 years.

Although the data are consistent with previous observations that increased exercise volume reduces mortality, more research is needed, as the current study findings may not reflect other dimensions of health, including neurological health, they said.

Despite the need for cautious interpretation of the results, the current study “supports the emerging and popular idea that step counting, which does not require consideration of exercise duration or intensity, can offer guidance toward robust and favorable health outcomes,” and may inform step-based activity goals to improve public health, the editorialists wrote.

The study was supported by the Japan Agency for Medical Research and Development, the Japan Society for the Promotion of Science, the Japan Endocrine Society, and the Meiji Yasuda Life Foundation of Health and Welfare. Dr. Inoue also was supported by the Program for the Development of Next-Generation Leading Scientists With Global Insight sponsored by the Ministry of Education, Culture, Sports, Science and Technology, Japan. The other researchers had no relevant financial conflicts to disclose. The editorial authors had no financial conflicts to disclose.
