Early Oseltamivir Benefits Hospitalized Influenza Patients
TOPLINE:
In adults hospitalized with influenza, oseltamivir treatment initiated on the same day as hospital admission was associated with fewer severe clinical outcomes, including worsening pulmonary disease, need for invasive ventilation, organ failure, and in-hospital death.
METHODOLOGY:
- The 2018 guidelines from the Infectious Diseases Society of America recommend prompt administration of oseltamivir to hospitalized patients with suspected or confirmed influenza, regardless of the time of symptom onset; however, variations in treatment practices and circulating virus strains may affect the effectiveness of this practice guideline.
- Researchers conducted a multicenter observational study across 24 hospitals in the United States during the 2022-2023 flu season to assess the benefits of initiating oseltamivir treatment on the same day as hospital admission for adults with acute influenza, compared with late or no treatment.
- They included 840 adults (age ≥ 18 years) with laboratory-confirmed influenza, of whom 415 initiated oseltamivir on the same day as hospital admission (early treatment).
- Among the 425 patients in the late/no treatment group, most (78%) received oseltamivir 1 day after admission, while 124 did not receive oseltamivir at all.
- The primary outcome was the peak pulmonary disease severity level that patients experienced during hospitalization, and secondary outcomes included hospital length of stay, ICU admission, initiation of extrapulmonary organ support using vasopressors or kidney replacement therapy, and in-hospital death.
TAKEAWAY:
- Patients in the early treatment group were less likely to experience progression and severe progression of pulmonary disease after the day of hospital admission, compared with those in the late or no treatment group (P < .001 and P = .027, respectively).
- Patients who received early oseltamivir treatment had 40% lower odds of greater peak pulmonary disease severity than those who received late or no treatment (proportional adjusted odds ratio [paOR], 0.60; 95% CI, 0.49-0.72).
- They also showed lower odds of ICU admission (aOR, 0.25; 95% CI, 0.13-0.49) and use of acute kidney replacement therapy or vasopressors (aOR, 0.40; 95% CI, 0.22-0.67).
- Those in the early treatment group also had a shorter hospital length of stay and 64% lower odds of in-hospital death (aOR, 0.36; 95% CI, 0.19-0.69) than those in the late or no treatment group (see the note on effect sizes below).
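A note on the effect sizes above: an odds ratio below 1 is conventionally read as a percentage reduction in odds. For example, the paOR of 0.60 corresponds to (1 − 0.60) × 100% = 40% lower odds of greater peak pulmonary severity, and the aOR of 0.36 to (1 − 0.36) × 100% = 64% lower odds of in-hospital death. Because odds ratios approximate risk ratios only when outcomes are uncommon, these figures describe relative odds rather than absolute risk.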
IN PRACTICE:
“These findings support current recommendations, such as the IDSA [Infectious Diseases Society of America] Influenza Clinical Practice Guidelines and CDC [Centers for Disease Control and Prevention] guidance, to initiate oseltamivir treatment as soon as possible for adult patients hospitalized with influenza,” the authors wrote.
SOURCE:
The study was led by Nathaniel M. Lewis, PhD, Influenza Division, CDC, Atlanta, Georgia, and was published online in Clinical Infectious Diseases.
LIMITATIONS:
Because it was conducted during an influenza A(H3N2) virus–predominant season, this study may not be generalizable to seasons when influenza A(H1N1)pdm09 or B viruses predominate. The study lacked sufficient power to examine various oseltamivir treatment initiation time points or to identify a potential maximum time-to-treatment threshold for effectiveness. Moreover, variables such as outpatient antiviral treatment before hospital admission and other treatments with macrolides, statins, corticosteroids, or immunomodulators before or during hospitalization were not collected, which may have influenced the findings.
DISCLOSURES:
The study received funding from the CDC and the National Center for Immunization and Respiratory Diseases. Some authors reported receiving research support, consulting fees, funding, grants, or fees for participation in an advisory board and having other ties with certain institutions and pharmaceutical companies.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
A version of this article first appeared on Medscape.com.
Iron Overload: The Silent Bone Breaker
TOPLINE:
Patients with serum ferritin levels higher than 1000 μg/L show a 91% increased risk for any fracture, with a doubled risk for vertebral and humerus fractures compared with those without iron overload.
METHODOLOGY:
- Iron overload’s association with decreased bone mineral density is established, but its relationship to osteoporotic fracture risk has remained understudied and inconsistent across fracture sites.
- Researchers conducted a population-based cohort study using a UK general practice database to evaluate the fracture risk in 20,264 patients with iron overload and 192,956 matched controls without elevated ferritin (mean age, 57 years; about 40% women).
- Patients with iron overload were identified as those with laboratory-confirmed iron overload (serum ferritin levels > 1000 μg/L; n = 13,510) or a diagnosis of an iron overloading disorder, such as thalassemia major, sickle cell disease, or hemochromatosis (n = 6754).
- The primary outcome of interest was the first occurrence of an osteoporotic fracture after the diagnosis of iron overload or first record of high ferritin.
- A sensitivity analysis assessed the risk for osteoporotic fracture in patients with laboratory-confirmed iron overload compared with those who had only a diagnosis code and no elevated ferritin.
TAKEAWAY:
- In the overall cohort, patients with iron overload had a 55% higher risk for any osteoporotic fracture than control individuals (adjusted hazard ratio [aHR], 1.55; 95% CI, 1.42-1.68), with the highest risk observed for vertebral fractures (aHR, 1.97; 95% CI, 1.63-2.37) and humerus fractures (aHR, 1.91; 95% CI, 1.61-2.26); see the model sketch after this list.
- Patients with laboratory-confirmed iron overload showed a 91% increased risk for any fracture (aHR, 1.91; 95% CI, 1.73-2.10), with a 2.5-fold higher risk observed for vertebral fractures (aHR, 2.51; 95% CI, 2.01-3.12), followed by humerus fractures (aHR, 2.41; 95% CI, 1.96-2.95).
- There was no increased risk for fracture at any site in patients with a diagnosis of an iron overloading disorder but no laboratory-confirmed iron overload.
- No sex-specific differences were identified in the association between iron overload and fracture risk.
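The adjusted hazard ratios above come from Cox proportional hazards models. As a rough, hedged illustration of how such an estimate is produced (synthetic data and invented column names, not the authors' code or data), a minimal sketch using the Python lifelines library:

```python
# Minimal sketch: recovering a hazard ratio of about 1.55 for a binary
# exposure from simulated survival data. All names and values are invented.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 5000
iron_overload = rng.integers(0, 2, n)
age = rng.normal(57, 10, n)

# Exponential event times whose hazard scales with exposure and age;
# dividing the baseline time by the hazard multiplier achieves this.
baseline = rng.exponential(20, n)
time = baseline / np.exp(0.44 * iron_overload + 0.02 * (age - 57))
fracture = (time < 15).astype(int)   # event observed within follow-up
time = np.minimum(time, 15)          # administrative censoring at 15 years

df = pd.DataFrame({"years": time, "fracture": fracture,
                   "iron_overload": iron_overload, "age": age})
cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="fracture")
print(cph.hazard_ratios_)  # iron_overload ~ exp(0.44) ≈ 1.55
```

The study's models adjusted for many more covariates than this toy example includes.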
IN PRACTICE:
“The main clinical message from our findings is that clinicians should consider iron overloading as a risk factor for fracture. Importantly, among high-risk patients presenting with serum ferritin values exceeding 1000 μg/L, osteoporosis screening and treatment strategies should be initiated in accordance with the guidelines for patients with hepatic disease,” the authors wrote.
SOURCE:
The study was led by Andrea Michelle Burden, PhD, Institute of Pharmaceutical Sciences, Department of Chemistry and Applied Biosciences, ETH Zürich in Switzerland, and was published online in The Journal of Clinical Endocrinology & Metabolism.
LIMITATIONS:
The study could not assess the effect of the duration of iron overload on fracture risk; patients could therefore enter the cohort with a single elevated serum ferritin value that may not have reflected systemic iron overload. The authors also acknowledged potential exposure misclassification among matched control individuals because only 2.9% had a serum ferritin value available at baseline. In addition, researchers were unable to adjust for inflammation status owing to the limited availability of C-reactive protein measurements and the lack of leukocyte count data in primary care settings.
DISCLOSURES:
This study received support through grants from the German Research Foundation. The authors declared no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Flu Vaccine Guards Household Contacts of Infected People
TOPLINE:
Influenza vaccination lowered the risk for secondary influenza infection among household contacts of infected individuals.
METHODOLOGY:
- Researchers conducted a prospective cohort study using data collected between 2017 and 2020 to estimate the effectiveness of influenza vaccines in preventing secondary infections among household contacts.
- Overall, 699 people were primary cases (the first in a household to become infected; median age, 13 years; 54.5% female), and 1581 were household contacts (median age, 31 years; 52.7% female); both groups were followed for 7 days.
- Participants completed daily symptom diaries and collected nasal swabs during the follow-up period.
- Participants also submitted their influenza vaccination history; 50.1% of household contacts had received a vaccine dose at least 14 days before illness onset in the household's primary case.
- The risk for secondary infection and vaccine effectiveness in preventing infection among household contacts were estimated overall and by virus type, subtype, and lineage.
TAKEAWAY:
- Nearly half (48.2%) of primary cases were children and adolescents aged 5-17 years.
- Overall, 22% of household contacts had laboratory-confirmed influenza during follow-up, and 7% of those infections were asymptomatic.
- The overall risk for secondary infection among unvaccinated household contacts was 18.8%, with the highest risk observed among children younger than age 5 years (29.9%).
- The overall effectiveness of influenza vaccines in preventing laboratory-confirmed infections among household contacts was 21% (95% CI, 1.4%-36.7%).
- The vaccine demonstrated specific protection against influenza B infection (56.4%; 95% CI, 30.1%-72.8%), particularly among those aged 5-17 years (see the worked example below).
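For context, vaccine effectiveness against infection is typically computed as VE = (1 − risk in vaccinated / risk in unvaccinated) × 100%. Applying the figures above, the 18.8% secondary infection risk among unvaccinated contacts and the overall VE of 21% imply a risk among vaccinated contacts of roughly 18.8% × (1 − 0.21) ≈ 14.9%; this back-calculation is illustrative and was not reported by the study, whose estimates were model adjusted.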
IN PRACTICE:
“Although complementary preventive strategies to prevent influenza in household settings may be considered, seasonal influenza vaccination is the primary strategy recommended for prevention of influenza illness and its complications,” the authors wrote.
SOURCE:
The study was led by Carlos G. Grijalva, MD, MPH, of Vanderbilt University Medical Center in Nashville, Tennessee, and was published online in JAMA Network Open.
LIMITATIONS:
The recruitment of infected individuals from clinical testing pools may have limited the generalizability of the risk for secondary infection to households in which the primary case had a milder or asymptomatic infection. The study was unable to assess the effectiveness of specific vaccine formulations, such as high-dose vaccines. Stratification of estimates by influenza subtypes and lineages was challenging because of small cell sizes.
DISCLOSURES:
This study was supported by grants from the Centers for Disease Control and Prevention (CDC), and the authors reported support from grants from the National Institute of Allergy and Infectious Diseases. Some authors reported receiving contracts, personal fees, and grants from the CDC and various pharmaceutical companies, such as Merck and Sanofi.
This article was created using several editorial tools, including artificial intelligence, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Oxidative Stress Marker May Signal Fracture Risk in T2D
TOPLINE:
Elevated levels of plasma F2-isoprostanes, a reliable marker of oxidative stress, are associated with an increased risk for fractures in older ambulatory patients with type 2 diabetes (T2D) independently of bone density.
METHODOLOGY:
- Patients with T2D face an increased risk for fractures at any given bone mineral density; oxidative stress levels (reflected in circulating F2-isoprostanes) are elevated in T2D, are associated with other T2D complications, and may weaken bone integrity.
- Researchers analyzed data from an observational cohort study to investigate the association between the levels of circulating F2-isoprostanes and the risk for clinical fractures in older patients with T2D.
- The data included 703 older ambulatory adults (baseline age, 70-79 years; about half White and half Black individuals; about half men and half women) from the Health, Aging and Body Composition Study, of whom 132 had T2D.
- Plasma F2-isoprostane levels were measured using baseline samples; bone turnover markers were also measured, including procollagen type 1 N-terminal propeptide, osteocalcin, and C-terminal telopeptide of type 1 collagen.
- Incident clinical fractures were tracked over a follow-up period of up to 17.3 years, with fractures verified through radiology reports.
TAKEAWAY:
- Overall, 25.8% of patients in the T2D group and 23.5% of adults in the nondiabetes group had an incident clinical fracture during a mean follow-up of 6.2 and 8.0 years, respectively.
- In patients with T2D, the risk for incident clinical fracture increased by 93% for every 1-SD increase in log plasma F2-isoprostane levels (hazard ratio [HR], 1.93; 95% CI, 1.26-2.95; P = .002), independently of baseline bone density, medication use, and other risk factors; no such association was seen in individuals without T2D (HR, 0.98; 95% CI, 0.81-1.18; P = .79; see the note on the per-SD estimate below).
- In the T2D group, elevated plasma F2-isoprostane levels were also associated with a decrease in total hip bone mineral density over 4 years (r = −0.28; P = .008); no such association was seen in the nondiabetes group.
- No correlation was found between plasma F2-isoprostane levels and circulating advanced glycoxidation end-products, bone turnover markers, or A1c levels in either group.
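A note on the per-SD estimate above: because exposure was modeled per standard deviation of log-transformed F2-isoprostane levels, the HR of 1.93 applies multiplicatively per 1-SD increment; if the proportional hazards assumption holds, a 2-SD increase would correspond to roughly 1.93² ≈ 3.7 times the fracture hazard. This extrapolation is illustrative and not a result reported by the study.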
IN PRACTICE:
“Oxidative stress in T2D may play an important role in the decline of bone quality and not just bone quantity,” the authors wrote.
SOURCE:
This study was led by Bowen Wang, PhD, Rensselaer Polytechnic Institute, Troy, New York. It was published online in The Journal of Clinical Endocrinology & Metabolism.
LIMITATIONS:
This study was conducted in a well-functioning elderly population with only White and Black participants, which may limit the generalizability of the findings to other age groups or less healthy populations. Additionally, the study did not assess prevalent vertebral fracture risk due to the small sample size.
DISCLOSURES:
This study was supported by the US National Institute on Aging and the Intramural Research Program of the US National Institutes of Health and the Dr and Ms Sands and Sands Family for Orthopaedic Research. The authors reported no relevant conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Vaping Linked to Higher Risk of Blurred Vision & Eye Pain
TOPLINE:
Adults who used electronic cigarettes (e-cigarettes or vapes) had more than double the risk for developing uveitis compared with nonusers, with elevated risks persisting for up to 4 years after initial use. This increased risk was observed across all age groups and affected both men and women as well as various ethnic groups.
METHODOLOGY:
- Researchers used the TriNetX global database, which contains data from over 100 million patients across the United States, Europe, the Middle East, and Africa, to examine the risk for developing uveitis among e-cigarette users.
- A total of 419,325 e-cigarette users aged 18 years or older (mean age, 51.41 years; 48.65% women) were included on the basis of diagnosis codes for vaping and unspecified nicotine dependence.
- The e-cigarette users were propensity score–matched to nonusers (see the sketch after this list).
- People were excluded if they had comorbid conditions that might have influenced the risk for uveitis.
- The primary outcome measure was the first-time encounter diagnosis of uveitis, identified using diagnosis codes for iridocyclitis, unspecified choroidal inflammation, posterior cyclitis, choroidal degeneration, retinal vasculitis, and panuveitis.
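The paper's exact matching procedure is not described above. As a hedged illustration of what 1:1 propensity score matching generally involves (the function, column names, and the with-replacement nearest-neighbor choice are assumptions for this sketch, not the authors' method):

```python
# Illustrative 1:1 propensity-score matching: fit P(exposed | covariates),
# then pair each exposed person with the closest-scoring unexposed person.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_one_to_one(df: pd.DataFrame, treat_col: str, covariates: list) -> pd.DataFrame:
    # Propensity score: predicted probability of exposure given covariates.
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df[treat_col])
    df = df.assign(ps=model.predict_proba(df[covariates])[:, 1])

    treated = df[df[treat_col] == 1]
    control = df[df[treat_col] == 0]

    # Nearest control by propensity score (with replacement, for simplicity).
    nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
    _, idx = nn.kneighbors(treated[["ps"]])
    return pd.concat([treated, control.iloc[idx.ravel()]])
```

Real analyses typically add calipers, match without replacement, and check covariate balance after matching.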
TAKEAWAY:
- E-cigarette users had a significantly higher risk for developing uveitis than nonusers (hazard ratio [HR], 2.53; 95% CI, 2.33-2.76), with elevated risks for iridocyclitis (HR, 2.59), unspecified chorioretinal inflammation (HR, 2.34), and retinal vasculitis (HR, 1.95).
- This increased risk for uveitis was observed across all age groups, affecting all genders and patients from Asian, Black or African American, and White ethnic backgrounds.
- The risk for uveitis was elevated as early as 7 days after initial e-cigarette use (HR, 6.35) and remained elevated at 4 years (HR, 2.58).
- A higher risk for uveitis was observed among individuals with a history of both e-cigarette and traditional cigarette use than among those who used traditional cigarettes only (HR, 1.39).
IN PRACTICE:
“This study has real-world implications as clinicians caring for patients with e-cigarette history should be aware of the potentially increased risk of new-onset uveitis,” the authors wrote.
SOURCE:
The study was led by Alan Y. Hsu, MD, from the Department of Ophthalmology at China Medical University Hospital in Taichung, Taiwan, and was published online on November 12, 2024, in Ophthalmology.
LIMITATIONS:
The retrospective nature of the study limited the determination of direct causality between e-cigarette use and the risk for uveitis. The study lacked information on the duration and quantity of e-cigarette exposure, which may have impacted the findings. Moreover, researchers could not isolate the effect of secondhand exposure to vaping or traditional cigarettes.
DISCLOSURES:
Study authors reported no relevant financial disclosures.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Screen Detection of Stages I-III CRC Boosts Disease-Free Survival Rates
TOPLINE:
Patients with stages I-III screen-detected colorectal cancer (CRC) had higher 3-year disease-free survival rates than those whose cancers were detected because of symptoms, and screen detection was associated with a 33% lower risk for recurrence.
METHODOLOGY:
- Patients with screen-detected CRC have better stage-specific overall survival rates than those with non-screen–detected CRC, but the impact of screening on recurrence rates is unknown.
- A retrospective study analyzed patients with CRC (age, 55-75 years) from the Netherlands Cancer Registry whose cancers were detected either through screening or outside of it.
- Screen-detected CRCs were those identified in patients who underwent colonoscopy after a positive fecal immunochemical test (FIT), whereas non-screen–detected CRCs were those detected in symptomatic patients.
TAKEAWAY:
- Researchers included 3725 patients with CRC (39.6% women), of whom 1652 (44.3%) had screen-detected CRC and 2073 (55.7%) had non-screen–detected CRC; cases were distributed approximately evenly across stages I-III (35.3%, 27.1%, and 37.6%, respectively).
- Patients with screen-detected CRC had significantly higher 3-year disease-free survival rates than those with non-screen–detected CRC (87.8% vs 77.2%; P < .001; see the sketch after this list).
- The disease-free survival advantage for screen-detected CRC was particularly notable in stage III cases, with rates of 77.9% vs 66.7% for non-screen–detected CRC (P < .001).
- Screen-detected CRC was more often detected at an earlier stage than non-screen–detected CRC (stage I or II: 72.4% vs 54.4%; P < .001).
- Across all stages, detection of CRC by screening was associated with a 33% lower risk for recurrence (P < .001) independent of patient age, gender, tumor location, stage, and treatment.
- Recurrence was the strongest predictor of overall survival across the study population (hazard ratio, 15.90; P < .001).
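To make the 3-year disease-free survival figures concrete, here is a minimal, hedged sketch of reading a survival rate at a fixed time point off Kaplan-Meier curves with the Python lifelines library; the exponential synthetic data are tuned only to land near the reported 87.8% and 77.2% values and are not the study's data:

```python
# Simulate recurrence/death times for two groups and read S(3 years)
# off each Kaplan-Meier curve. Rates are chosen so that exp(-rate * 3)
# approximately matches the reported 3-year disease-free survival.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(1)

def simulate(rate, n=1500, follow_up=5.0):
    t = rng.exponential(1 / rate, n)          # time to event, in years
    observed = (t <= follow_up).astype(int)   # 0 = censored at end of follow-up
    return np.minimum(t, follow_up), observed

for label, rate in [("screen-detected", 0.043), ("non-screen-detected", 0.086)]:
    durations, events = simulate(rate)
    kmf = KaplanMeierFitter()
    kmf.fit(durations, event_observed=events, label=label)
    print(label, round(float(kmf.survival_function_at_times(3.0).iloc[0]), 3))
```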
IN PRACTICE:
“Apart from CRC stage, mode of detection could be used to assess an individual’s risk for recurrence and survival, which may contribute to a more personalized treatment,” the authors wrote.
SOURCE:
The study, led by Sanne J.K.F. Pluimers, Department of Gastroenterology and Hepatology, Erasmus University Medical Center/Erasmus MC Cancer Institute, Rotterdam, the Netherlands, was published online in Clinical Gastroenterology and Hepatology.
LIMITATIONS:
The follow-up time was relatively short, restricting the ability to evaluate the long-term effects of screening on CRC recurrence. This study focused on recurrence solely within the FIT-based screening program, and the results were not generalizable to other screening methods. Due to Dutch privacy law, data on CRC-specific causes of death were unavailable, which may have affected the specificity of survival outcomes.
DISCLOSURES:
There was no funding source for this study. The authors declared no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Does Semaglutide Increase Risk for Optic Neuropathy?
TOPLINE:
Semaglutide use was not associated with an increased risk for nonarteritic anterior ischemic optic neuropathy (NAION) in adults with type 2 diabetes, obesity, or both, across 1, 2, and 3 years of follow-up.
METHODOLOGY:
- Researchers conducted a retrospective cohort study using data from the TriNetX Analytics Network to investigate the potential risk for NAION associated with semaglutide use in a broader population worldwide.
- They included Caucasians aged ≥ 18 years with only type 2 diabetes (n = 37,245), only obesity (n = 138,391), or both (n = 64,989) who visited healthcare facilities three or more times.
- The participants were further grouped into those prescribed semaglutide and those using non–GLP-1 RA medications.
- Propensity score matching was performed to balance age, sex, body mass index, A1C levels, medications, and underlying comorbidities between the semaglutide and non–GLP-1 RA groups (a generic sketch of this matching technique follows this list).
- The main outcome measure was the occurrence of NAION, evaluated at 1, 2, and 3 years of follow-up.
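Propensity score matching of this kind is commonly implemented as a logistic regression on the confounders followed by nearest-neighbor pairing on the estimated scores. The sketch below illustrates that generic recipe with scikit-learn on simulated data; it is an assumption-level illustration, not the TriNetX platform's actual matching engine.

```python
# Minimal sketch of 1:1 nearest-neighbor propensity score matching,
# the general technique used to balance treated and comparator groups.
# Variables and data are simulated, hypothetical illustrations; this
# is not the TriNetX platform's actual matching implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.normal(55, 12, n),    # age
    rng.integers(0, 2, n),    # sex
    rng.normal(31, 6, n),     # body mass index
    rng.normal(7.5, 1.2, n),  # A1C
])
treated = rng.integers(0, 2, n)  # 1 = semaglutide, 0 = comparator

# Step 1: model the probability of treatment given the covariates.
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each treated patient to the comparator with the
# closest propensity score (1:1, with replacement for simplicity).
treated_idx = np.where(treated == 1)[0]
control_idx = np.where(treated == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control_idx].reshape(-1, 1))
_, matches = nn.kneighbors(ps[treated_idx].reshape(-1, 1))
matched_controls = control_idx[matches.ravel()]

# The outcome analysis (e.g., NAION hazard ratios) would then be run
# on treated_idx plus matched_controls.
print(len(treated_idx), "treated matched to", len(set(matched_controls)), "controls")
```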
TAKEAWAY:
- The use of semaglutide vs non–GLP-1 RAs was not associated with an increased risk for NAION in people with only type 2 diabetes during the 1-year (hazard ratio [HR], 2.32; 95% CI, 0.60-8.97), 2-year (HR, 2.31; 95% CI, 0.86-6.17), and 3-year (HR, 1.51; 95% CI, 0.71-3.25) follow-up periods.
- Similarly, in the obesity-only cohort, use of semaglutide was not linked to the development of NAION across 1-year (HR, 0.41; 95% CI, 0.08-2.09), 2-year (HR, 0.67; 95% CI, 0.20-2.24), and 3-year (HR, 0.72; 95% CI, 0.24-2.17) follow-up periods.
- The patients with both diabetes and obesity also showed no significant association between use of semaglutide and the risk for NAION across each follow-up period.
- Sensitivity analysis confirmed the prescription of semaglutide was not associated with an increased risk for NAION compared with non–GLP-1 RA medications.
IN PRACTICE:
“Our large, multinational, population-based, real-world study found that semaglutide is not associated with an increased risk of NAION in the general population,” the authors of the study wrote.
SOURCE:
The study was led by Chien-Chih Chou, MD, PhD, of National Yang Ming Chiao Tung University, in Taipei City, Taiwan, and was published online on November 2, 2024, in Ophthalmology.
LIMITATIONS:
The retrospective nature of the study may have limited the ability to establish causality between the use of semaglutide and the risk for NAION. The reliance on diagnosis coding for NAION may have introduced a potential misclassification of cases. Moreover, approximately half of the healthcare organizations in the TriNetX network are based in the United States, potentially limiting the diversity of the data.
DISCLOSURES:
This study was supported by a grant from Taichung Veterans General Hospital. The authors declared no potential conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Home Spirometry Has Potential for Detecting Pulmonary Decline in Systemic Sclerosis
TOPLINE:
Home spirometry shows potential for early detection of pulmonary function decline in patients with systemic sclerosis–associated interstitial lung disease (SSc-ILD). It shows good cross-sectional correlation with hospital tests, along with 60% sensitivity and 87% specificity for detecting progressive ILD.
METHODOLOGY:
- Researchers conducted a prospective, observational study to examine the validity of home spirometry for detecting a decline in pulmonary function in patients with SSc-ILD.
- They included 43 patients aged 18 years or older with SSc-ILD from two tertiary referral centers in the Netherlands who had received immunosuppressive treatment for a maximum of 8 weeks before baseline.
- All participants were required to take weekly home spirometry measurements using a handheld spirometer for 1 year, with 35 completing 6 months of follow-up and 31 completing 12 months.
- Pulmonary function tests were conducted in the hospital at baseline and semiannual visits.
- The primary outcome was the agreement (κ statistic) between home and hospital measurements after 1 year in detecting a decline in forced vital capacity (FVC) of 5% or more; the sensitivity and specificity of home spirometry in detecting an absolute decline in FVC% were also evaluated, using hospital tests as the gold standard (a worked sketch of these metrics follows this list).
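For readers less familiar with these agreement metrics, the sketch below works through Cohen's κ, sensitivity, and specificity on a toy set of decline labels; the 0/1 values are hypothetical and are not the study's data.

```python
# Worked sketch of the agreement metrics used in the study: Cohen's
# kappa plus sensitivity and specificity for detecting an FVC decline
# of >= 5%, with hospital tests as the gold standard. The 0/1 labels
# below are hypothetical, not the trial's data.
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# 1 = decline of >= 5% detected, 0 = no decline; one entry per patient.
hospital = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]
home     = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]

kappa = cohen_kappa_score(hospital, home)

tn, fp, fn, tp = confusion_matrix(hospital, home).ravel()
sensitivity = tp / (tp + fn)  # declines the home device catches
specificity = tn / (tn + fp)  # stable patients it correctly clears

print(f"kappa={kappa:.2f} sensitivity={sensitivity:.0%} specificity={specificity:.0%}")
```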
TAKEAWAY:
- Home spirometry showed a fair agreement with the pulmonary function tests conducted at the hospital (κ, 0.40; 95% CI, 0.01-0.79).
- Home spirometry showed a sensitivity of 60% and specificity of 87% in detecting a decline in FVC% predicted of 5% or more.
- The intraclass correlation coefficient between home and hospital FVC measurements was moderate to high, with values of 0.85 at baseline, 0.84 at 6 months, and 0.72 at 12 months (P < .0001 for all).
- However, the longitudinal agreement between home and hospital measurements was lower, with a correlation coefficient of 0.55.
IN PRACTICE:
“These findings suggest that home spirometry is both feasible and moderately accurate in patients with systemic sclerosis–associated ILD. However, where home spirometry fell short was the low sensitivity in detecting a decline in FVC% predicted,” experts wrote in an accompanying editorial.
“The results of this study support further evaluation of the implementation of home spirometry in addition to regular healthcare management but do not endorse relying solely on home monitoring to detect a decline in pulmonary function,” study authors wrote.
SOURCE:
The study was led by Arthiha Velauthapillai, MD, Department of Rheumatology, Radboud University Medical Center, Nijmegen, the Netherlands, and was published online November 8, 2024, in The Lancet Rheumatology.
LIMITATIONS:
The study might have been underpowered because of inaccuracies in initial assumptions, with a lower-than-anticipated prevalence of progressive ILD and a higher dropout rate. The study included only Dutch patients, which may have limited the generalizability of its findings to other settings with lower internet access or literacy rates.
DISCLOSURES:
This study was partly supported by grants from Galapagos and Boehringer Ingelheim. Some authors received grants or consulting or speaker fees from Boehringer Ingelheim, AstraZeneca, and other pharmaceutical companies.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
New Strategy Led to Modest Decline in Antibiotic Misuse
TOPLINE:
A multifaceted intervention for healthcare professionals was associated with a modest decline in unnecessary antibiotic prescribing for common infections, particularly in general practice.
METHODOLOGY:
- Researchers conducted this study to assess the impact of an intervention on antibiotic prescribing and dispensing for common infections.
- Healthcare professionals from general practice, out-of-hours services, nursing homes, and community pharmacies in France, Greece, Lithuania, Poland, and Spain registered their patient interactions related to antibiotic prescribing and dispensing both before and after the intervention.
- Overall, 407 healthcare professionals participated in the first registration, of whom 345 undertook the intervention and participated in the second registration; they documented 10,744 infections during the initial registration and 10,132 cases during the second period.
- The 5-hour intervention included evaluating and discussing feedback on the outcomes of the initial registration, improving communication skills, and offering communication tools.
- The impact of the intervention was assessed in terms of potentially unnecessary antibiotic prescriptions, non–first-line antibiotic choices, and the percentages of correct and incorrect safety advice given with each prescription (a minimal sketch of the before/after comparison follows this list).
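A before/after change in a prescribing percentage of this kind is typically tested as a two-proportion comparison. The sketch below uses statsmodels with hypothetical round counts chosen only to reproduce percentages of the same shape as those reported; it is not the study's analysis code.

```python
# Minimal sketch of the kind of before/after proportion comparison
# behind results such as "72.2% -> 65.2%, P < .001". The counts are
# hypothetical round numbers, not the study's registration data.
from statsmodels.stats.proportion import proportions_ztest

unnecessary = [2888, 2608]   # prescriptions judged unnecessary (before, after)
registered  = [4000, 4000]   # eligible consultations in each registration

stat, pvalue = proportions_ztest(count=unnecessary, nobs=registered)
print(f"before={unnecessary[0]/registered[0]:.1%} "
      f"after={unnecessary[1]/registered[1]:.1%} p={pvalue:.4g}")
```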
TAKEAWAY:
- General practice clinicians showed a significant overall reduction in unnecessary antibiotic prescriptions from 72.2% during the first registration to 65.2% after the intervention (P < .001), with variations across countries ranging from a 19.9% reduction in Lithuania to a 1.3% increase in Greece.
- Out-of-hours services showed a minimal change in unnecessary antibiotic prescribing from 52.5% to 52.1%, whereas nursing homes showed a slight increase from 56.1% to 58.6%.
- Community pharmacies showed significant improvements, with the provision of correct advice increasing by 17% (P < .001) and safety checks improving from 47% to 55.3% in 1 year (P < .001).
- However, the choice of non–first-line antibiotics significantly increased by 29.2% in the second registration period (P < .001).
IN PRACTICE:
“These findings highlight the need for alternative and tailored approaches in antimicrobial stewardship programs in long-term care facilities, with a greater focus on nurses. This includes implementing hygiene measures and empowering nurses to improve the diagnosis of suspected infections, such as urinary tract infections, while debunking prevalent myths and providing clear-cut information for better management of these common infections,” the authors wrote.
SOURCE:
The study was led by Ana García-Sangenís, of Fundació Institut Universitari per a la Recerca a l’Atenció Primària de Salut Jordi Gol i Gurina, Barcelona, Spain, and was published online on November 12, 2024, in Family Practice.
LIMITATIONS:
The study lacked a control group, which limited the ability to attribute changes solely to the intervention. The voluntary participation of healthcare professionals might have introduced selection bias, as participants might have had a greater interest in quality improvement programs than the general population of healthcare providers. Clinical outcomes were not evaluated, which may have created ambiguity regarding whether complication rates or clinical failures varied between the groups.
DISCLOSURES:
This study received funding from the European Union’s Third Health Programme. One author reported receiving fees from pharmaceutical companies and acting as a member of the board of Steno Diabetes Center, Odense, Denmark.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Infliximab vs Adalimumab: Which Is Best for Behçet Syndrome?
TOPLINE:
Both infliximab and adalimumab are safe and effective in achieving remission in patients with severe mucocutaneous Behçet syndrome, with adalimumab demonstrating a quicker response time; both drugs also improve quality of life and disease activity scores.
METHODOLOGY:
- Researchers conducted a phase 3 prospective study to evaluate the efficacy and safety of the anti–tumor necrosis factor–alpha agents infliximab and adalimumab in patients with Behçet syndrome who presented with mucocutaneous manifestations and an inadequate response to prior treatments; patients were recruited from four Italian tertiary referral centers specializing in Behçet syndrome.
- Patients were randomly assigned to receive either 5 mg/kg intravenous infliximab at weeks 0, 2, and 6 and then every 6-8 weeks (n = 22; mean age, 46 years; 32% women) or 40 mg subcutaneous adalimumab every 2 weeks (n = 18; mean age, 48 years; 28% women) for 24 weeks.
- Patients were followed up for an additional 12 weeks after the treatment period, with regular assessments of disease activity, safety, and adherence to treatment.
- The primary outcome was the time to response of mucocutaneous manifestations over 6 months (a generic sketch of this kind of time-to-response analysis follows this list); the secondary outcomes included relapse rates; quality of life, assessed using the Short-Form Health Survey 36; and disease activity, assessed using the Behçet Disease Current Activity Form.
- The safety and tolerability of the drugs were evaluated as the frequency of treatment-emergent adverse events (AEs) and serious AEs, monitored every 2 weeks.
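A time-to-response endpoint of this kind is typically summarized with Kaplan-Meier estimates and compared between arms with a log-rank test. The sketch below shows that generic approach with the lifelines library; the event times are hypothetical, not the trial's data.

```python
# Generic sketch of a time-to-response comparison of the kind used for
# the primary endpoint (median time to mucocutaneous response, ADA vs
# IFX). Event times below are hypothetical, not the trial's data.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Days until response; an event flag of 0 marks a patient censored
# without response at the end of follow-up.
ada_days  = [30, 42, 45, 60, 38, 42]
ada_event = [1, 1, 1, 1, 1, 1]
ifx_days  = [90, 152, 168, 168, 120, 150]
ifx_event = [1, 1, 0, 0, 1, 1]

km = KaplanMeierFitter()
km.fit(ada_days, ada_event, label="adalimumab")
print("ADA median time to response:", km.median_survival_time_)

result = logrank_test(ada_days, ifx_days,
                      event_observed_A=ada_event, event_observed_B=ifx_event)
print("log-rank p-value:", result.p_value)
```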
TAKEAWAY:
- The resolution of mucocutaneous manifestations was achieved significantly more quickly with adalimumab than with infliximab, with a median time to response of 42 vs 152 days (P = .001); the proportion of responders was also higher in the adalimumab group than in the infliximab group (94% vs 64%; P = .023).
- Patients in the infliximab group had a higher risk for nonresponse (adjusted hazard ratio [HR], 3.33; P = .012) and relapse (adjusted HR, 7.57; P = .036) than those in the adalimumab group.
- Both infliximab and adalimumab significantly improved the quality of life in all dimensions (P < .05 for all) and disease activity scores (P < .001 for both) from baseline to the end of the study period, with no significant differences found between the groups.
- Two AEs were reported in the adalimumab group, one of which was serious (myocardial infarction); three nonserious AEs were reported in the infliximab group.
IN PRACTICE:
“ADA [adalimumab] and IFX [infliximab] were generally well tolerated and efficacious in patients with BS [Behçet syndrome] who showed an inadequate response to prior treatments with at least AZA [azathioprine] or CyA [cyclosporine],” the authors wrote. “Although a more detailed treat-to-target profile is yet to be better defined, [the study] results are also crucial in terms of prescriptiveness (currently off label), not only in Italy but also beyond national borders, as the evidence coming from real life still needs to be confirmed by growing data from clinical trials.”
SOURCE:
The study was led by Rosaria Talarico, MD, PhD, University of Pisa in Italy, and was published online in Annals of the Rheumatic Diseases.
LIMITATIONS:
The small sample size and the distinctive study design may have limited the generalizability of the findings.
DISCLOSURES:
This study was funded through a grant from the Italian Medicines Agency. The authors declared no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.