Online CBT for Patients with AD: Self-Guided vs. Clinician-Guided Intervention Compared

TOPLINE:

A brief self-guided online cognitive-behavioral therapy (CBT) intervention was noninferior to comprehensive clinician-guided CBT in reducing symptoms in patients with atopic dermatitis (AD), with both groups showing similar improvements on the Patient-Oriented Eczema Measure (POEM).

METHODOLOGY:

  • Researchers conducted a single-blind randomized clinical noninferiority trial at Karolinska Institutet in Stockholm, Sweden, enrolling 168 adults with AD (mean age, 39 years; 84.5% women) from November 2022 to April 2023.
  • Participants were randomly assigned to either a 12-week self-guided online CBT intervention (n = 86) without clinician support or a comprehensive 12-week clinician-guided online CBT program (n = 82).
  • The primary outcome was the change in POEM score from baseline; a reduction of 4 or more points was considered a response, and the predefined noninferiority margin was 3 points (see the sketch after these bullets).
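
As a rough illustration of the noninferiority logic above, here is a minimal Python sketch. The group sizes and mean POEM improvements come from this article; the 5-point spread and the normal-approximation confidence interval are assumptions for illustration, not the trial's actual adjusted analysis.

```python
# Minimal noninferiority sketch (illustrative): self-guided CBT is noninferior
# if the lower 95% CI bound for (self-guided - clinician-guided) POEM
# improvement stays above -margin.
import numpy as np
from scipy import stats

def noninferiority_check(guided, self_guided, margin=3.0, alpha=0.05):
    diff = np.mean(self_guided) - np.mean(guided)
    se = np.sqrt(np.var(self_guided, ddof=1) / len(self_guided)
                 + np.var(guided, ddof=1) / len(guided))
    lower = diff - stats.norm.ppf(1 - alpha / 2) * se
    return diff, lower, lower > -margin

# Hypothetical data roughly matching the reported groups (SD of 5 is assumed):
rng = np.random.default_rng(0)
guided = rng.normal(4.20, 5.0, 82)       # clinician-guided: n = 82, mean +4.20
self_guided = rng.normal(4.60, 5.0, 86)  # self-guided: n = 86, mean +4.60
print(noninferiority_check(guided, self_guided))
```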

TAKEAWAY:

  • The clinician-guided group improved by 4.20 points on POEM and the self-guided group by 4.60 points, for an estimated mean difference in change of 0.36 points, which was below the noninferiority margin.
  • Clinicians spent a mean of 36 minutes on treatment guidance and an additional 14 minutes on assessments in the clinician-guided group, vs a mean of 15.8 minutes on assessments alone in the self-guided group.
  • Both groups demonstrated significant improvements in quality of life, sleep, depressive mood, pruritus, and stress, and no serious adverse events were reported.
  • Completion rates were higher in the self-guided group, with 81% of participants completing five or more modules, compared with 67% in the clinician-guided group.

IN PRACTICE:

“Overall, the findings support a self-guided intervention as a noninferior and cost-effective alternative to a previously evaluated clinician-guided treatment,” the authors wrote. “Because psychological interventions are rare in dermatological care, this study is an important step toward implementation of CBT for people with AD. The effectiveness of CBT interventions in primary and dermatological specialist care should be investigated.”

SOURCE:

The study was led by Dorian Kern, PhD, Division of Psychology, Karolinska Institutet, and was published online in JAMA Dermatology.

LIMITATIONS: 

High data loss for secondary measurements could affect interpretation of these results. The study relied solely on self-reported measures. The predominance of women participants and the Swedish-language requirement may have limited participation from migrant populations, which could hinder the broader implementation of the study’s findings.

DISCLOSURES:

The study was supported by the Swedish Ministry of Health and Social Affairs. Kern reported receiving grants from the ministry during the conduct of the study. Other authors reported authorship and royalties, personal fees, or grants, or holding stock in DahliaQomit.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.

Central Line Skin Reactions in Children: Survey Addresses Treatment Protocols in Use

TOPLINE:

A survey of dermatologists found that although all respondents receive inpatient central line dressing (CLD)-related consults, most lack standardized protocols for managing adverse skin reactions and reported varying management approaches.

METHODOLOGY:

  • Researchers developed a 14-item Qualtrics survey and administered it to 107 dermatologists providing pediatric inpatient care, distributed via the Society for Pediatric Dermatology’s Inpatient Dermatology Section and Section Chief email lists.
  • A total of 35 dermatologists (33%) from multiple institutions responded to the survey; most respondents (94%) specialized in pediatric dermatology.
  • Researchers assessed management of CLD-associated adverse skin reactions.

TAKEAWAY:

  • All respondents reported receiving CLD-related consults, but 66% indicated there was no personal or institutional standardized approach for managing CLD-associated skin reactions.
  • Respondents reported that most reactions occurred in children aged 1-12 years (19 of 25 respondents [76%]) vs those aged < 1 year (3 of 25 [12%]).
  • Management strategies included switching to alternative products, applying topical corticosteroids, and performing patch testing for allergies. 

IN PRACTICE:

“Insights derived from this study, including variation in clinician familiarity with reaction patterns, underscore the necessity of a standardized protocol for classifying and managing cutaneous CLD reactions in pediatric patients,” the authors wrote. “Further investigation is needed to better characterize CLD-associated allergic CD [contact dermatitis], irritant CD, and skin infections, as well as at-risk populations, to better inform clinical approaches,” they added.

SOURCE:

The study was led by Carly Mulinda, Columbia University College of Physicians and Surgeons, New York, and was published online on December 16 in Pediatric Dermatology.

LIMITATIONS:

The authors noted variable respondent awareness of institutional CLD practices and potential recency bias as key limitations of the study.

DISCLOSURES:

Study funding source was not declared. The authors reported no conflicts of interest.
 

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.

AI-Aided Colonoscopy’s ‘Intelligent’ Module Ups Polyp Detection

Results from the British COLO-DETECT trial add to the growing body of evidence supporting the use of artificial intelligence (AI)–aided colonoscopy to increase premalignant colorectal polyp detection in routine colonoscopy practice.

Colin J. Rees, a professor of gastroenterology in the Faculty of Medical Sciences at Newcastle University in Newcastle upon Tyne, England, and colleagues compared the real-world clinical effectiveness of computer-aided detection (CADe)–assisted colonoscopy using an “intelligent” module with that of standard colonoscopy in a study in The Lancet Gastroenterology & Hepatology.

They found that the GI Genius Intelligent Endoscopy Module (Medtronic) increased the mean number of adenomas detected per procedure and the adenoma detection rate, especially for small, flat (type 0-IIa) polyps and sessile serrated lesions, which are more likely to be missed.

“Missed sessile serrated lesions disproportionately increase the risk of post-colonoscopy colorectal cancer, thus the adoption of GI Genius into routine colonoscopy practice could not only increase polyp detection but also reduce the incidence of post-colonoscopy colorectal cancer,” the investigators wrote.

“AI is going to have a major impact upon most aspects of healthcare. Some areas of medical practice are now well established, and some are still in evolution,” Rees, who is also president of the British Society of Gastroenterology, said in an interview. “Within gastroenterology, the role of AI in endoscopic diagnostics is also evolving. The COLO-DETECT trial demonstrates that AI increases detection of lesions, and work is ongoing to see how AI might help with characterization and other elements of endoscopic practice.”

 

Study Details

The multicenter, open-label, parallel-arm, pragmatic randomized controlled trial was conducted at 12 National Health Service hospitals in England. The study cohort consisted of adults aged ≥ 18 years undergoing colonoscopy for colorectal cancer (CRC) screening, for gastrointestinal symptoms, or for surveillance owing to personal or family history.

Recruiting staff, participants, and colonoscopists were unmasked to allocation, whereas histopathologists, cochief investigators, and trial statisticians were masked.

CADe-assisted colonoscopy consisted of standard colonoscopy plus the GI Genius module active for at least the entire inspection phase of colonoscope withdrawal.

The primary outcome was mean adenomas per procedure (total number of adenomas detected divided by total number of procedures). The key secondary outcome was adenoma detection rate (proportion of colonoscopies with at least one adenoma).
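
Both endpoints are simple functions of per-procedure adenoma counts; the short sketch below computes them from a hypothetical list of counts (illustrative only, since the trial reported adjusted estimates from regression models).

```python
def mean_adenomas_per_procedure(counts):
    """Total adenomas detected divided by total number of procedures."""
    return sum(counts) / len(counts)

def adenoma_detection_rate(counts):
    """Proportion of colonoscopies detecting at least one adenoma."""
    return sum(1 for c in counts if c >= 1) / len(counts)

counts = [0, 2, 1, 0, 3]  # hypothetical adenoma counts for five colonoscopies
print(mean_adenomas_per_procedure(counts))  # 1.2
print(adenoma_detection_rate(counts))       # 0.6
```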

From March 2021 to April 2023, the investigators recruited 2032 participants (55.7% men; mean age, 62.4 years) and randomly assigned them to CADe-assisted colonoscopy (n = 1015) or standard colonoscopy (n = 1017). Of these, 60.6% were undergoing screening and 39.4% had symptomatic indications.

Mean adenomas per procedure were 1.56 (SD, 2.82; n = 1001 participants with data) in the CADe-assisted group vs 1.21 (n = 1009) in the standard group, for an adjusted mean difference of 0.36 (95% CI, 0.14-0.57; adjusted incidence rate ratio, 1.30; 95% CI, 1.15-1.47; P < .0001).

Adenomas were detected in 555 (56.6%) of 980 participants in the CADe-assisted group vs 477 (48.4%) of 986 in the standard group, representing a proportion difference of 8.3% (95% CI, 3.9-12.7; adjusted odds ratio, 1.47; 95% CI, 1.21-1.78; P < .0001).

As to safety, adverse events were numerically comparable between the intervention and control groups, with 25 vs 19 overall events and 4 vs 6 serious events. On independent review, no adverse events in the CADe-assisted colonoscopy group were related to GI Genius.

 


Offering a US perspective on the study, Nabil M. Mansour, MD, an associate professor and director of the McNair General GI Clinic at Baylor College of Medicine in Houston, Texas, said GI Genius and other CADe systems represent a significant advance over standard colonoscopy for identifying premalignant polyps. “While the data have been mixed, most studies, particularly randomized controlled trials, have shown significant improvements with CADe in detection, both in terms of adenomas per colonoscopy and reductions in adenoma miss rate,” he said in an interview.

He added that the main utility of CADe is for asymptomatic patients undergoing average-risk screening and surveillance colonoscopy for CRC prevention, as well as for those with positive stool-based screening tests, “though there is no downside to using it in symptomatic patients as well.” Although AI-assisted colonoscopy is likely used in fewer than half of endoscopy centers overall, mainly academic ones, his clinic has been using it for the past year.

The main question, Mansour cautioned, is whether increased detection of small polyps will actually reduce CRC incidence or mortality, and it will likely be several years before clear, concrete data can answer that.

“Most studies have shown the improvement in adenoma detection is mainly for diminutive polyps < 5 mm in diameter, but whether that will actually translate to substantive improvements in hard outcomes is as yet unknown,” he said. “But if gastroenterologists are interested in doing everything they can today to help improve detection rates and lower miss rates of premalignant polyps, serious consideration should be given to adopting the use of CADe in practice.”

This study was supported by Medtronic. Rees reported receiving grant funding from ARC Medical, Norgine, Medtronic, 3-D Matrix, and Olympus Medical, and has been an expert witness for ARC Medical. Other authors disclosed receiving research funding, honoraria, or travel expenses from Medtronic or other private companies. Mansour had no competing interests to declare.

A version of this article appeared on Medscape.com.

FROM THE LANCET GASTROENTEROLOGY & HEPATOLOGY

Early Postpartum IUD Doesn’t Spike Healthcare Utilization

TOPLINE:

Healthcare utilization after immediate and delayed intrauterine device (IUD) placement postpartum was comparable, with the immediate placement group making slightly fewer visits to obstetricians or gynecologists (ob/gyns). While immediate placement was associated with increased rates of imaging, it showed lower rates of laparoscopic surgery for IUD-related complications.

METHODOLOGY:

  • Researchers conducted a retrospective cohort study using data from Kaiser Permanente Northern California electronic health records to compare healthcare utilization after immediate (within 24 hours of placental delivery) and delayed (after 24 hours up to 6 weeks later) IUD placement.
  • They included 11,875 patients who delivered a live neonate and had an IUD placed between 0 and 63 days postpartum from 2016 to 2020, of whom 1543 received immediate IUD placement.
  • The primary outcome was the number of outpatient visits to ob/gyns for any indication within 1 year after delivery.
  • The secondary outcomes included pelvic or abdominal ultrasonograms performed in radiology departments, surgical interventions, hospitalizations related to IUD placement, and rates of pregnancy within 1 year.

TAKEAWAY:

  • Immediate placement of an IUD was associated with a modest decrease in the number of overall visits to ob/gyns compared with delayed placement (mean visits, 2.30 vs 2.47; adjusted risk ratio [aRR], 0.91; 95% CI, 0.87-0.94; P < .001); see the sketch after this list.
  • Immediate placement of an IUD was associated with more imaging studies not within an ob/gyn visit (aRR, 2.26; P < .001); however, the rates of laparoscopic surgeries for complications related to IUD were lower in the immediate than in the delayed group (0.0% vs 0.4%; P = .005).
  • Hospitalizations related to IUD insertion were rare but more frequent in the immediate group (0.4% vs 0.02%; P < .001).
  • No significant differences in repeat pregnancies were observed between the groups at 1 year (P = .342), and immediate placement of an IUD was not associated with an increased risk for ectopic pregnancies.
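
As a rough check on the headline visit numbers, the crude (unadjusted) ratio of mean visit counts can be computed directly; the sketch below is illustrative only, since the study reports an adjusted risk ratio from a regression model.

```python
def crude_rate_ratio(mean_immediate, mean_delayed):
    """Unadjusted ratio of mean ob/gyn visit counts, immediate vs delayed."""
    return mean_immediate / mean_delayed

# Reported means: 2.30 (immediate) vs 2.47 (delayed)
print(round(crude_rate_ratio(2.30, 2.47), 2))  # 0.93, near the adjusted aRR of 0.91
```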

IN PRACTICE:

“Because one of the main goals of immediate IUD is preventing short-interval unintended pregnancies, it is of critical importance to highlight that there was no difference in the pregnancy rate between groups in the study,” the authors wrote. “This study can guide patient counseling and consent for immediate IUD,” they added.

SOURCE:

This study was led by Talis M. Swisher, MD, of the Department of Obstetrics and Gynecology at the San Leandro Medical Center of Kaiser Permanente in San Leandro, California. It was published online on December 12, 2024, in Obstetrics & Gynecology.

LIMITATIONS:

Data on patient satisfaction were not included in this study. No cost-benefit analysis was carried out because of challenges in comparing insurance plans and regional disparities in costs across the United States. The study setting was unique to Kaiser Permanente Northern California, where all hospitalized patients had access to IUDs and multiple settings of ultrasonography were readily available. Virtual visits were not included in the analysis.

DISCLOSURES:

This study was supported by the Kaiser Permanente Northern California Graduate Medical Education Program, Kaiser Foundation Hospitals. The authors reported no potential conflicts of interest.



This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.

Imipenem-Cilastatin-Relebactam, the New Go-To for Pneumonia?

TOPLINE:

In a multinational phase 3 trial, imipenem-cilastatin-relebactam demonstrated noninferiority to piperacillin-tazobactam in treating critically ill patients with hospital-acquired bacterial pneumonia (HABP) or ventilator-associated bacterial pneumonia (VABP), with a comparable safety profile.

METHODOLOGY:

  • This multinational phase 3 trial, conducted between September 2018 and July 2022, compared imipenem-cilastatin-relebactam with piperacillin-tazobactam for HABP and VABP to support its use across multiple countries.
  • Overall, 270 patients with HABP or VABP (mean age, 57.6 years; 73.3% men) were randomly assigned to receive either intravenous imipenem-cilastatin-relebactam (500 mg/250 mg) or piperacillin-tazobactam (4000 mg/500 mg) every 6 hours over 30 minutes for 7-14 days.
  • Both treatment groups included critically ill patients, with 54.5% and 55.1% of patients in the imipenem-cilastatin-relebactam and piperacillin-tazobactam groups, respectively, having an Acute Physiology and Chronic Health Evaluation II score ≥ 15.
  • The primary outcome was 28-day all-cause mortality; secondary outcomes included the rates of clinical and microbiological responses, as well as the incidence of adverse events.

TAKEAWAY:

  • Imipenem-cilastatin-relebactam was noninferior to piperacillin-tazobactam in terms of 28-day all-cause mortality (adjusted difference, 5.2%; 95% CI, −1.5 to 12.4; P = .024 for noninferiority).
  • At the end of treatment, the rates of a favorable clinical response were comparable between the imipenem-cilastatin-relebactam (71.6%) and piperacillin-tazobactam (68.4%) groups.
  • After treatment, microbiological response rates were 48.8% in the imipenem-cilastatin-relebactam group and 47.9% in the piperacillin-tazobactam group.
  • The incidence of drug-related adverse events was similar across the treatment groups, with diarrhea, increased levels of alanine aminotransferase and aspartate aminotransferase, and abnormal hepatic function being the most common events.

IN PRACTICE:

“These results support the use of IMI/REL [imipenem-cilastatin-relebactam] in MDR [multidrug-resistant] infections globally, including to expand the range of available treatments for critically ill patients with HABP/VABP in China, and provide additional data to inform the World Health Organization’s MDR pathogen strategy,” the authors wrote.

SOURCE:

This study was led by Junjie Li, Department of Pulmonary and Critical Care Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China. It was published online on December 12, 2024, in the International Journal of Infectious Diseases.

LIMITATIONS:

This study excluded patients with immunosuppression and those on intermittent hemodialysis, limiting the generalizability of the results to these populations.

DISCLOSURES:

This study was funded by Merck Sharp & Dohme LLC, a subsidiary of Merck & Co. Inc., Rahway, New Jersey. Some authors were employees of Merck Sharp & Dohme LLC, New Jersey, or MSD China.

 

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.

Exercising Longer May Boost Weight Loss, Meta-Analysis Shows

Article Type
Changed

TOPLINE:

Aerobic exercise shows a dose-response relationship with weight loss, with every additional 30 minutes of weekly exercise linked to reductions in body weight, waist circumference, and body fat in adults who were overweight or had obesity.

METHODOLOGY:

  • Researchers conducted a meta-analysis of randomized clinical trials to investigate the association of varying intensities and durations of aerobic exercise with adiposity measures in adults who were overweight or had obesity.
  • Overall, 116 randomized clinical trials that spanned North America, Asia, Europe, Australia, South America, and Africa and involved 6880 adults (mean age, 46 years; 61% women) were included.
  • The trials were required to have intervention durations of at least 8 weeks; all trials used supervised aerobic exercise, such as walking or running, while the control groups remained sedentary or continued usual activities.
  • Exercise intensity was defined as light (40%-55% of maximum heart rate), moderate (55%-70%), or vigorous (70%-90%); see the sketch after this list.
  • The primary outcomes were body weight changes and adverse events; the secondary outcomes included changes in waist circumference, quality-of-life scores, and reduction in medications like antihypertensives.
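
As context for the intensity bands above, the sketch below converts them into target heart-rate ranges. It assumes the common 220-minus-age estimate of maximum heart rate, a convention not taken from the study itself, so treat it as illustrative only.

```python
# Minimal sketch: target heart-rate ranges for the trial's intensity bands.
# Assumes max HR = 220 - age, a common convention not reported by the study.

def heart_rate_zones(age: int) -> dict:
    """Return (lower, upper) target heart rates for each intensity band."""
    max_hr = 220 - age
    bands = {"light": (0.40, 0.55), "moderate": (0.55, 0.70), "vigorous": (0.70, 0.90)}
    return {name: (round(lo * max_hr, 1), round(hi * max_hr, 1))
            for name, (lo, hi) in bands.items()}

print(heart_rate_zones(46))  # mean age of the included participants
# {'light': (69.6, 95.7), 'moderate': (95.7, 121.8), 'vigorous': (121.8, 156.6)}
```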

TAKEAWAY:

  • Every 30 minutes of aerobic exercise per week was associated with a 1.14-lb (0.52-kg) reduction in body weight (certainty of evidence, moderate).
  • Every 30 minutes of aerobic exercise per week was also associated with lower waist circumference (mean difference, −0.56 cm; 95% CI, –0.67 to –0.45), body fat percentage (mean difference, –0.37%; 95% CI, –0.43 to –0.31), and body fat mass (mean difference, –0.20 kg; 95% CI, –0.32 to –0.08), along with reduced visceral and subcutaneous adipose tissue.
  • A dose-response meta-analysis showed that body fat percentage improved most at 150 minutes of aerobic exercise per week, whereas body weight and waist circumference decreased linearly with increasing duration up to 300 min/wk across intensities (see the sketch after this list).
  • Adverse events with aerobic exercise were mostly mild or moderate musculoskeletal symptoms.
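
To make the per-30-min/wk point estimates above concrete, the sketch below extrapolates them to an arbitrary weekly duration. It assumes strict linearity, which the dose-response analysis supports only up to about 300 min/wk for weight and waist circumference (and about 150 min/wk for body fat percentage), so it is a rough illustration rather than a prediction tool.

```python
# Rough sketch: linear extrapolation of the meta-analysis point estimates.
# Each value is the estimated change per 30 min/wk of aerobic exercise.
PER_30_MIN = {
    "body_weight_lb": -1.14,
    "waist_circumference_cm": -0.56,
    "body_fat_pct": -0.37,
}

def predicted_changes(minutes_per_week: float) -> dict:
    """Estimated changes at a given weekly duration, assuming linearity."""
    units = minutes_per_week / 30
    return {name: round(effect * units, 2) for name, effect in PER_30_MIN.items()}

print(predicted_changes(150))
# {'body_weight_lb': -5.7, 'waist_circumference_cm': -2.8, 'body_fat_pct': -1.85}
```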

IN PRACTICE:

“Point-specific estimates for different aerobic exercise duration and intensity can help patients and healthcare professionals select the optimal aerobic exercise duration and intensity according to their weight loss goals,” the authors wrote.

 

SOURCE:

The study was led by Ahmad Jayedi, PhD, of the Department of Epidemiology and Biostatistics in the School of Public Health at Imperial College London, England. It was published online on December 26, 2024, in JAMA Network Open.

 

LIMITATIONS:

High heterogeneity was present in the data. Only one trial included measures of health-related quality of life, and two included measures of medication use. The included trials did not report participants’ dietary habits or smoking status, so these potential confounders could not be adjusted for.

 

DISCLOSURES:

No funding sources were reported. The authors reported no relevant conflicts of interest.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.


Brain Changes in Youth Who Use Substances: Cause or Effect?

Article Type
Changed

A widely accepted assumption in the addiction field is that neuroanatomical changes observed in young people who use alcohol or other substances are largely the consequence of exposure to these substances.

But a new study suggests that neuroanatomical features in children, including greater whole brain and cortical volumes, are evident before exposure to any substances.

The investigators, led by Alex P. Miller, PhD, assistant professor, Department of Psychiatry, Indiana University, Indianapolis, noted that the findings add to a growing body of work that suggests individual brain structure, along with environmental exposure and genetic risk, may influence risk for substance use disorder. 

The findings were published online in JAMA Network Open.

 

Neuroanatomy a Predisposing Risk Factor?

Earlier research showed that substance use is associated with lower gray matter volume, thinner cortex, and less white matter integrity. While it has been widely thought that these changes were induced by the use of alcohol or illicit drugs, recent longitudinal and genetic studies suggest that the neuroanatomical changes may also be predisposing risk factors for substance use.

To better understand the issue, investigators analyzed data on 9804 children (mean baseline age, 9.9 years; 53% male; 76% White) at 22 US sites enrolled in the Adolescent Brain Cognitive Development (ABCD) Study, which is examining brain and behavioral development from middle childhood to young adulthood.

The researchers collected information on the use of alcohol, nicotine, cannabis, and other illicit substances from in-person interviews at baseline and years 1, 2, and 3, as well as interim phone interviews at 6, 18, and 30 months. MRI scans provided extensive brain structural data, including global and regional cortical volume, thickness, surface area, sulcal depth, and subcortical volume.

Of the total, 3460 participants (35%) initiated substance use before age 15, with 90% reporting alcohol use initiation. There was considerable overlap between initiation of alcohol, nicotine, and cannabis.

The researchers tested whether baseline neuroanatomical variability was associated with any substance use initiation before or up to 3 years after the initial neuroimaging scans. Study covariates included baseline age, sex, pubertal status, familial relationship (eg, sibling or twin), and prenatal substance exposures. The researchers did not control for sociodemographic characteristics, as these could themselves influence the associations.
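
The article does not report the authors’ exact model specification, but one plausible form for this kind of covariate-adjusted analysis is a mixed-effects regression with family membership as a random effect, so that siblings and twins are not treated as independent observations. The sketch below is illustrative only; all column names are hypothetical stand-ins for ABCD variables.

```python
# Illustrative sketch, not the authors' actual analysis: a covariate-adjusted
# mixed model of one neuroanatomical outcome on substance use initiation.
# All column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("abcd_baseline.csv")  # hypothetical data extract

model = smf.mixedlm(
    "whole_brain_volume ~ initiated_use + age + sex + pubertal_status"
    " + prenatal_exposure",
    data=df,
    groups=df["family_id"],  # random intercept per family (siblings/twins)
)
result = model.fit()
print(result.summary())  # the coefficient on initiated_use is the association
```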

 

Significant Brain Differences

Compared with no substance use initiation, any substance use initiation was associated with larger global neuroanatomical indices, including whole brain (beta = 0.05; P = 2.80 × 10–8), total intracranial (beta = 0.04; P = 3.49 × 10−6), cortical (beta = 0.05; P = 4.31 × 10–8), and subcortical volumes (beta = 0.05; P = 4.39 × 10–8), as well as greater total cortical surface area (beta = 0.04; P = 6.05 × 10–7).

The direction of associations between cortical thickness and substance use initiation was regionally specific; any substance use initiation was characterized by thinner cortex in all frontal regions (eg, rostral middle frontal gyrus, beta = −0.03; P = 6.99 × 10–6), but thicker cortex in all other lobes. It was also associated with larger regional brain volumes, deeper regional sulci, and differences in regional cortical surface area.

The authors noted total cortical thickness peaks at age 1.7 years and steadily declines throughout life. By contrast, subcortical volumes peak at 14.4 years of age and generally remain stable before steep later life declines.

Secondary analyses compared initiation of the three most commonly used substances in early adolescence (alcohol, nicotine, and cannabis) with no substance use.

Findings for alcohol largely mirrored those for any substance use. However, the study uncovered additional significant associations, including greater left lateral occipital volume and bilateral parahippocampal gyri cortical thickness and less bilateral superior frontal gyri cortical thickness.

Nicotine use was associated with lower right superior frontal gyrus volume and deeper left lateral orbitofrontal cortex sulci. And cannabis use was associated with thinner left precentral gyrus and lower right inferior parietal gyrus and right caudate volumes.

The authors noted results for nicotine and cannabis may not have had adequate statistical power, and small effects suggest these findings aren’t clinically informative for individuals. However, they wrote, “They do inform and challenge current theoretical models of addiction.”

 

Associations Precede Substance Use

A post hoc analysis further challenges current models of addiction. When researchers looked only at the 1203 youth who initiated substance use after the baseline neuroimaging session, they found most associations preceded substance use.

“That regional associations may precede substance use initiation, including less cortical thickness in the right rostral middle frontal gyrus, challenges predominant interpretations that these associations arise largely due to neurotoxic consequences of exposure and increases the plausibility that these features may, at least partially, reflect markers of predispositional risk,” wrote the authors.

A study limitation was that unmeasured confounders and undetected systemic differences in missing data may have influenced associations. Sociodemographic, environmental, and genetic variables that were not included as covariates are likely associated with both neuroanatomical variability and substance use initiation and may moderate associations between them, said the authors.

The ABCD Study provides “a robust and large database of longitudinal data” that goes beyond previous neuroimaging research “to understand the bidirectional relationship between brain structure and substance use,” Miller said in a press release.

“The hope is that these types of studies, in conjunction with other data on environmental exposures and genetic risk, could help change how we think about the development of substance use disorders and inform more accurate models of addiction moving forward,” Miller said.

 

Reevaluating Causal Assumptions

In an accompanying editorial, Felix Pichardo, MA, and Sylia Wilson, PhD, from the Institute of Child Development, University of Minnesota, Minneapolis, suggested that it may be time to “reevaluate the causal assumptions that underlie brain disease models of addiction” and the mechanisms by which it develops, persists, and becomes harmful.

Neurotoxic effects of substances are central to current brain disease models of addiction, wrote Pichardo and Wilson. “Substance exposure is thought to affect cortical and subcortical regions that support interrelated systems, resulting in desensitization of reward-related processing, increased stress that prompts cravings, negative emotions when cravings are unsated, and weakening of cognitive control abilities that leads to repeated returns to use.”

The editorial writers praised the ABCD Study’s large sample size, which provides the precision, statistical accuracy, and capacity to identify both larger and smaller effects that are critical for addiction research.

Unlike most addiction research that relies on cross-sectional designs, the current study used longitudinal assessments, which is another of its strengths, they noted.

“Longitudinal study designs like in the ABCD Study are fundamental for establishing temporal ordering across constructs, which is important because establishing temporal precedence is a key step in determining causal links and underlying mechanisms.”

The inclusion of several genetically informative components, such as the family study design, nested twin subsamples, and DNA collection, “allows researchers to extend beyond temporal precedence toward increased causal inference and identification of mechanisms,” they added.

The study received support from the National Institutes of Health. The study authors and editorial writers had no relevant conflicts of interest.

A version of this article appeared on Medscape.com.


AI Shows Early Promise in Detecting Infantile Spasms

Article Type
Changed

Artificial intelligence (AI) analysis of caregiver-recorded videos has the potential to diagnose infantile epileptic spasm syndrome, according to a new study.

Infants with the condition can have poor outcomes with even small delays in diagnosis and ensuing treatment, potentially leading to intellectual disability, autism, and worse epilepsy. “It’s super important to start the treatment early, but oftentimes, these symptoms are just misrecognized by primary care or ER physicians. It takes a long time to diagnose,” said Gadi Miron, MD, who presented the study at the American Epilepsy Society (AES) 78th Annual Meeting 2024.

 

What Is This? What Should I Do?

Parents who observe unusual behavior often seek advice from friends and family members and receive false reassurance that the behavior is normal. Even physicians may contribute to the delay if they are unaware of infantile spasms, a rare disorder. “And then again, they get false reassurance, and because of that false reassurance, you get a diagnostic delay,” said Shaun Hussain, MD, who was asked to comment on the study.

The timing and frequency of infantile spasms create challenges for diagnosis. The spasms last only about 1 second, and they tend to cluster in the morning. By the time a caregiver brings an infant to a healthcare provider, they may have trouble describing the behavior. “Parents are struggling to describe what they saw, and it often just does not resonate, or doesn’t make the healthcare provider think about infantile spasms,” said Hussain.

The idea to employ AI came from looking at videos of infants on YouTube and the realization that many parents upload them in an effort to seek advice. “So many parents upload these videos and ask in the comments, ‘What is this? What should I do? Can somebody help me?’” said Miron, who is a neurologist and researcher at Charité — Universitätsmedizin Berlin in Germany.

 

AI and Video Can Aid Diagnosis

The researchers trained a model to recognize epileptic spasms using openly available YouTube videos of 141 infants, comprising 991 recorded seizures and 597 non-seizure video segments, along with a non-seizure cohort of 127 infants contributing 1385 video segments.

Each video segment was reviewed by two specialists, and they had to agree for it to be counted as an epileptic spasm.

The model detected epileptic seizures with an area under the curve (AUC) of 0.96. It had a sensitivity of 82%, specificity of 90%, and accuracy of 85% when applied to the training set.

The researchers then tested it against three validation sets. In the first, a smartphone-based set retrieved from TikTok of 26 infants with 70 epileptic spasms and 31 non-seizure 5-second video segments, the model had an AUC of 0.98, a sensitivity of 89%, a specificity of 100%, and an accuracy of 92%.

A second smartphone-based set of 67 infants, drawn from YouTube, showed a false detection rate of 0.75% (five detections out of 666 video segments). A third dataset collected from in-hospital EEG monitoring of 21 infants without seizures revealed a false-positive rate of 3.4% (365 of 10,860 video segments).
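
These validation figures are simple ratios, so they can be checked directly. The sketch below restates the standard screening metrics for reference and reproduces the two reported false-detection rates from the counts given above.

```python
# Standard detection metrics (definitions, for reference):
def sensitivity(tp, fn):
    return tp / (tp + fn)          # true positives / all actual positives

def specificity(tn, fp):
    return tn / (tn + fp)          # true negatives / all actual negatives

# Reported false-detection rates, recomputed from the stated counts:
print(f"{5 / 666:.2%}")            # 0.75% -- second smartphone set (YouTube)
print(f"{365 / 10_860:.2%}")       # 3.36%, i.e., ~3.4% -- in-hospital EEG set
```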

The group is now developing an app that will allow parents to upload videos that can be analyzed using the model. Physicians can then view the video and determine if there is suspicion of a seizure.

Miron also believes that this approach could find use in other types of seizures and populations, including older children and adults. “We have actually built some models for detection of seizures for videos in adults as well. Looking more towards the future, I’m sure AI will be used to analyze videos of other neurological disorders with motor symptoms [such as] movement disorders and gait,” he said.

 

Encouraging Early Research

Hussain, who is a professor of pediatrics at UCLA Health, lauded the work generally but emphasized that it is still in the early stage. “Their comparison was a relatively easy one. They’re just comparing normal versus infantile spasms, and they’re looking at the seizure versus normal behavior. Usually, the distinction is much harder in that there are kids who are having behaviors that are maybe other types of seizures, which is much harder to distinguish from infantile spasms, in contrast to just normal behaviors. The other mimic of infantile spasms is things like infant heartburn. Those kids will often have some posturing, and they often will be in pain. They might cry. That’s something that infantile spasms will often generate, so that’s why there’s a lot of confusion between those two,” said Hussain.

He noted that there have been efforts to raise awareness of infantile spasms among physicians and the general public, but these have not translated into earlier detection.

 

Another Resource

In fact, parents with suspicions often go to social media sites like YouTube and a Facebook group dedicated to infantile spasms. “You can Google infantile spasms, and you’ll see examples of weird behaviors, and then you’ll look in the comments, and you’ll see this commenter said: ‘These could be infantile spasms. You should go to a children’s hospital. Don’t leave until you get an EEG to make sure that these are not seizures.’ There’s all kinds of great advice there, and it really shouldn’t be the situation where to get the best care, you need to go on YouTube,” said Hussain.

Miron and Hussain had no relevant financial disclosures.

A version of this article first appeared on Medscape.com.


Broken Sleep Linked to MASLD

Article Type
Changed

TOPLINE:

Fragmented sleep — that is, increased wakefulness and reduced sleep efficiency — is associated with metabolic dysfunction–associated steatotic liver disease (MASLD), a study using actigraphy showed.

METHODOLOGY:

  • Researchers assessed sleep-wake rhythms in 35 patients with MASLD (median age, 58 years; 66% were men; 80% with metabolic syndrome) and 16 matched healthy controls (median age, 61 years; 50% were men) using data collected 24/7 via actigraphy for 4 weeks.
  • Subanalyses were conducted with MASLD comparator groups: 16 patients with metabolic dysfunction–associated steatohepatitis (MASH), 8 with MASH with cirrhosis, and 11 with non-MASH–related cirrhosis.
  • All participants visited the clinic at baseline, week 2, and week 4 to undergo a clinical investigation and complete questionnaires about their sleep.
  • A standardized sleep hygiene education session was conducted at week 2.

TAKEAWAY:

  • Actigraphy data from patients with MASLD did not reveal significant differences in bedtime, sleep-onset latency, sleep duration, wake-up time, or time in bed compared with controls.
  • However, compared with controls, those with MASLD woke 55% more often at night (8.5 vs 5.5 awakenings), lay awake 113% longer after first falling asleep (45.4 vs 21.3 minutes), and slept more often and longer during the day, reflecting decreased sleep efficiency (see the sketch after this list).
  • Subgroup analyses showed that actigraphy-measured sleep patterns and quality were similarly impaired in patients with MASH, MASH with cirrhosis, and non–MASH-related cirrhosis.
  • Patients with MASLD self-reported their fragmented sleep as shorter sleep with a delayed onset. In sleep diaries, 32% of patients with MASLD reported sleep disturbances caused by psychological stress, compared with only 6.25% of controls and 9% of patients with cirrhosis.
  • The sleep education session did not change the actigraphy measures or the sleep parameters assessed with sleep questionnaires at the end of the study.
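
The percentage differences quoted in this list follow directly from the reported group values; the short check below reproduces them.

```python
# Recomputing the relative differences reported above.
def pct_increase(case, control):
    return (case - control) / control * 100

print(f"{pct_increase(8.5, 5.5):.0f}%")    # 55% more nightly awakenings
print(f"{pct_increase(45.4, 21.3):.0f}%")  # 113% longer wake after sleep onset
```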

IN PRACTICE:

“We concluded from our data that sleep fragmentation plays a role in the pathogenesis of human MASLD. Whether MASLD causes sleep disorders or vice versa remains unknown. The underlying mechanism presumably involves genetics, environmental factors, and the activation of immune responses — ultimately driven by obesity and metabolic syndrome,” the study’s corresponding author said.

SOURCE:

The study, led by Sofia Schaeffer, PhD, University of Basel, Switzerland, was published online in Frontiers in Network Physiology.

LIMITATIONS:

The study had several limitations. There was a significant difference in body mass index between patients with MASLD (median, 31) and controls (median, 23.5), representing a potential confounder that could explain the differences in sleep behavior. Undetected obstructive sleep apnea could also be a confounding factor. The small number of participants limited the interpretation and generalization of the data, especially in the MASLD subgroups.

DISCLOSURES:

This study was supported by a grant from the University of Basel. One coauthor received a research grant from the University Center for Gastrointestinal and Liver Diseases, Basel, Switzerland. Another coauthor was employed by NovoLytiX. Schaeffer and the remaining coauthors declared that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

A version of this article first appeared on Medscape.com.

Education Boosts Safe Sharps Disposal in Diabetic Care

TOPLINE:

A program combining theoretical training with free disposal containers can effectively increase knowledge and improve sharps waste disposal practices among patients with diabetes.

METHODOLOGY:

  • Many patients with diabetes administer insulin at home, and unsafe disposal of the resulting sharps waste, including insulin pens, syringes, and lancets, increases the risk for needle-stick injuries, microbial infections, and plastic waste accumulation, highlighting the need for safe disposal practices.
  • Researchers conducted a quasi-experimental study at El-Horraya Polyclinic in Alexandria, Egypt, between November 2022 and April 2023 to evaluate the effectiveness of an intervention program in improving knowledge and practices related to safe sharps disposal among patients with diabetes.
  • Overall, 100 patients (median age, 61 years; 92% living in urban areas) with type 1 or type 2 diabetes were recruited and divided into educational intervention (n = 50) and nonintervention (n = 50) groups; a majority (67%) had had diabetes for more than 10 years.
  • The intervention group received educational sessions addressing the risks and environmental impacts of improper disposal, along with practical demonstrations of correct sharps disposal methods; participants were also given free puncture-resistant containers for the sharps waste generated by diabetes management.
  • Assessments were performed at baseline and at 2 and 4 months after the intervention, evaluating knowledge levels (poor: < 50%; fair: 50% to < 70%; good: 70%-100%) and practice scores (poor: 0-6; fair: 7-10; good: 11-14); the banding logic is sketched after this list.
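
For reference, here is a minimal sketch of the scoring bands described in the last bullet above. The function names are illustrative, and only the published cutoffs are encoded; the study's questionnaires are not reproduced here.

```python
# Minimal sketch of the study's scoring bands (cutoffs from the bullet above).

def knowledge_band(percent):
    """Classify a knowledge score expressed as a percentage of the maximum."""
    if percent < 50:
        return "poor"
    if percent < 70:
        return "fair"
    return "good"          # 70%-100%

def practice_band(score):
    """Classify a practice score on the study's 0-14 scale."""
    if score <= 6:
        return "poor"
    if score <= 10:
        return "fair"
    return "good"          # 11-14

print(knowledge_band(65))  # -> fair
print(practice_band(12))   # -> good
```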

TAKEAWAY:

  • Overall, 58% of the patients used insulin pens, and approximately 75% required two doses of insulin daily.
  • The median monthly disposal was 10 syringes per patient among syringe users and eight pen needles per patient among pen users.
  • At baseline, there were no differences in the knowledge scores between the intervention and nonintervention groups; however, at both 2 and 4 months, the intervention group showed a significantly higher median knowledge score than the nonintervention group (P < .001 for both).
  • Practice scores likewise showed marked improvement in the intervention group compared with the nonintervention group at the end of the program (P < .001).

IN PRACTICE:

“The success of the environmental education program underscores the need for targeted interventions to enhance patient knowledge and safe sharps disposal practices. By offering accessible disposal options and raising awareness, healthcare facilities can significantly contribute to preventing accidental needle-stick injuries and reducing the risk of infectious disease transmission,” the authors wrote.

SOURCE:

This study was led by Hossam Mohamed Hassan Soliman, High Institute of Public Health, Alexandria University, Egypt. It was published online in Scientific Reports.

LIMITATIONS:

Interview bias and self-reporting bias in data collection were major limitations of this study. The quasi-experimental design, lacking randomization, may have limited the strength of causal inferences.

DISCLOSURES:

No funding was received for this study, and the authors reported no relevant conflicts of interest.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
