Vitamin D fails to stave off statin-related muscle symptoms
Vitamin D supplements do not prevent muscle symptoms in new statin users or affect the likelihood of discontinuing a statin due to muscle pain and discomfort, a substudy of the VITAL trial indicates.
Among more than 2,000 randomized participants, statin-associated muscle symptoms (SAMS) were reported by 31% assigned to vitamin D and 31% assigned to placebo.
The two groups were equally likely to stop taking a statin due to muscle symptoms, at 13%.
No significant difference was observed in SAMS (odds ratio [OR], 0.97; 95% confidence interval [CI], 0.80-1.18) or statin discontinuations (OR, 1.04; 95% CI, 0.80-1.35) after adjustment for baseline variables and other characteristics, namely age, sex, and African-American race, previously found to be associated with SAMS in VITAL.
“We actually thought when we started out that maybe we were going to show something, that maybe it was going to be that the people who got the vitamin D were least likely to have a problem with a statin than all those who didn’t get vitamin D, but that is not what we showed,” senior author Neil J. Stone, MD, Northwestern University, Chicago, told this news organization.
He noted that patients in the clinic with low levels of vitamin D often have muscle pain and discomfort and that previous unblinded studies suggested vitamin D might benefit patients with SAMS and reduce statin intolerance.
As previously reported, the double-blind VITAL trial showed no difference in the primary prevention of cardiovascular disease or cancer at 5 years among 25,871 middle-aged adults randomized to vitamin D3 at 2000 IU/d or placebo, regardless of their baseline vitamin D level.
Importantly, unlike previous studies showing a benefit of vitamin D on SAMS, VITAL participants were unaware of whether they were taking vitamin D or placebo and were not expecting any help with their muscle symptoms, first author Mark A. Hlatky, MD, Stanford (Calif.) University, pointed out in an interview.
As to how many statin users turn to the popular supplement for SAMS, he said that number couldn’t be pinned down, despite a lengthy search. “But I think it’s very common, because up to half of people stop taking their statins within a year and many of these do so because of statin-associated muscle symptoms, and we found it in about 30% of people who have them. I have them myself and was motivated to study it because I thought this was an interesting question.”
The results were published online in JAMA Cardiology.
SAMS by baseline 25-OHD
The substudy included 2,083 patients who initiated statin therapy after randomization and were surveyed in early 2016 about their statin use and muscle symptoms.
Two-thirds, or 1,397 patients, had 25-hydroxy vitamin D (25-OHD) measured at baseline, with 47% having levels < 30 ng/mL and 13% levels < 20 ng/mL.
Serum 25-OHD levels were virtually identical in the two treatment groups (mean, 30.4 ng/mL; median, 30.0 ng/mL). The frequency of SAMS did not differ between those assigned to vitamin D or placebo (28% vs. 31%).
The odds ratios for the association of vitamin D with SAMS were:
- 0.86 in all respondents with 25-OHD measured (95% CI, 0.69-1.09).
- 0.87 in those with levels ≥ 30 ng/mL (95% CI, 0.64-1.19).
- 0.85 with levels of 20-30 ng/mL (95% CI, 0.56-1.28).
- 0.93 with levels < 20 ng/mL (95% CI, 0.50-1.74).
The test for treatment effect modification by baseline serum 25-OHD level was not significant (P for interaction = .83).
In addition, the rate of muscle symptoms was similar between participants randomized to vitamin D and placebo when researchers defined low 25-OHD using a cutpoint of < 30 ng/mL (27% vs. 30%) or < 20 ng/mL (33% vs. 35%).
“We didn’t find any evidence at all that the people who came into the study with low levels of vitamin D did better with the supplement in this case,” Dr. Hlatky said. “So that wasn’t the reason we didn’t see anything.”
Critics may suggest the trial didn’t use a high enough dose of vitamin D, but both Dr. Hlatky and Dr. Stone say that’s unlikely to be a factor in the results because 2,000 IU/d is a substantial dose and well above the recommended adult daily dose of 600-800 IU.
They caution that the substudy wasn’t prespecified, was smaller than the parent trial, and did not have a protocol in place to detail SAMS. They also can’t rule out the possibility that vitamin D may have an effect in patients who have confirmed intolerance to multiple statins, especially after adjustment for the statin type and dose.
“If you’re taking vitamin D to keep from having statin-associated muscle symptoms, this very carefully done substudy with the various caveats doesn’t support that and that’s not something I would give my patients,” Dr. Stone said.
“The most important thing from a negative study is that it allows you to focus your attention on things that may be much more productive rather than assuming that just giving everybody vitamin D will take care of the statin issue,” he added. “Maybe the answer is going to be somewhere else, and there’ll be a lot of people I’m sure who will offer their advice as what the answer is but, I would argue, we want to see more studies to pin it down. So people can get some science behind what they do to try to reduce statin-associated muscle symptoms.”
Paul D. Thompson, MD, chief of cardiology emeritus at Hartford (Conn.) Hospital, and a SAMS expert who was not involved with the research, said, “This is a useful publication, and it’s smart in that it took advantage of a study that was already done.”
He acknowledged being skeptical of a beneficial effect of vitamin D supplementation on SAMS, because some previous data have been retracted, but said that potential treatments are best tested in patients with confirmed statin myalgia, as was the case in his team’s negative trial of CoQ10 supplementation.
That said, the present “study was able to at least give some of the best evidence so far that vitamin D doesn’t do anything to improve symptoms,” Dr. Thompson said. “So maybe it will cut down on so many vitamin D levels [being measured] and use of vitamin D when you don’t really need it.”
The study was sponsored by the Hyperlipidemia Research Fund at Northwestern University. The VITAL trial was supported by grants from the National Institutes of Health, and Quest Diagnostics performed the laboratory measurements at no additional cost. Dr. Hlatky reports no relevant financial relationships. Dr. Stone reports a grant from the Hyperlipidemia Research Fund at Northwestern and an honorarium for educational activity for Knowledge to Practice. Dr. Thompson is on the executive committee for a study examining bempedoic acid in patients with statin-associated muscle symptoms.
A version of this article first appeared on Medscape.com.
More vaccinated people dying of COVID as fewer get booster shots
“We can no longer say this is a pandemic of the unvaccinated,” Kaiser Family Foundation Vice President Cynthia Cox, who conducted the analysis, told The Washington Post.
People who had been vaccinated or boosted made up 58% of COVID-19 deaths in August, the analysis showed. The rate has been on the rise: 23% of coronavirus deaths were among vaccinated people in September 2021, and the vaccinated made up 42% of deaths in January and February 2022, the Post reported.
Research continues to show that people who are vaccinated or boosted have a lower risk of death. The rise in deaths among the vaccinated is the result of three factors, Ms. Cox said.
- A large majority of people in the United States have been vaccinated (267 million people, according to the analysis).
- People who are at the greatest risk of dying from COVID-19 are more likely to be vaccinated and boosted, such as the elderly.
- Vaccines lose their effectiveness over time; the virus changes to avoid vaccines; and people need to choose to get boosters to continue to be protected.
The case for the effectiveness of vaccines and boosters versus skipping the shots remains strong. People age 6 months and older who are unvaccinated are six times more likely to die of COVID-19, compared to those who got the primary series of shots, the Post reported. Survival rates were even better with additional booster shots, particularly among older people.
“I feel very confident that if people continue to get vaccinated at good numbers, if people get boosted, we can absolutely have a very safe and healthy holiday season,” Ashish Jha, White House coronavirus czar, said on Nov. 22.
The number of Americans who have gotten the most recent booster has been increasing ahead of the holidays. CDC data show that 12% of the U.S. population age 5 and older has received a booster.
A new study by a team of researchers from Harvard University and Yale University estimates that 94% of the U.S. population has been infected with COVID-19 at least once, leaving just 1 in 20 people who have never had the virus.
“Despite these high exposure numbers, there is still substantial population susceptibility to infection with an Omicron variant,” the authors wrote.
They said that if all states achieved the vaccination levels of Vermont, where 55% of people had at least one booster and 22% got a second one, there would be “an appreciable improvement in population immunity, with greater relative impact for protection against infection versus severe disease. This additional protection results from both the recovery of immunity lost due to waning and the increased effectiveness of the bivalent booster against Omicron infections.”
A version of this article first appeared on WebMD.com.
Study supports banning probiotics from the ICU
NASHVILLE, TENN. – Probiotics pose a bacteremia risk to patients in the intensive care unit, a concern supported by several case series, according to new findings presented at the annual meeting of the American College of Chest Physicians (CHEST).
According to data presented by Scott Mayer, MD, chief resident at HealthONE Denver, which is part of the HCA Healthcare chain of hospitals, the risk is increased by any probiotic exposure. However, the risk is particularly acute for powdered formulations, presumably because powder more easily disseminates to contaminate central venous catheters.
“We think that probiotics should be eliminated entirely from the ICU. If not, we encourage eliminating the powder formulations,” said Dr. Mayer, who led the study.
The data linking probiotics to ICU bacteremia were drawn from 23,533 ICU admissions over a 5-year period in the HCA hospital database. Bacteremia proven to be probiotic-related was uncommon (0.37%), but the consequences were serious.
For those with probiotic-related bacteremia, the mortality rate was 25.6%, essentially twofold greater than the 13.5% mortality rate among those without probiotic bacteremia. An odds ratio drawn from a regression analysis confirmed a significant difference (OR, 2.23; 95% confidence interval, 1.30-3.71; P < .01).
“The absolute risk of mortality is modest but not insignificant,” said Dr. Mayer. This suggests one probiotic-related mortality for about every 200 patients taking a probiotic in the ICU.
These deaths occur without any clear compensatory benefit from taking probiotics, according to Dr. Mayer. There is a long list of potential benefits from probiotics that might be relevant to patients in the ICU, particularly prophylaxis for Clostridioides difficile infection, along with a variety of gastrointestinal indications such as irritable bowel syndrome; however, none of these is firmly established in general, and particularly not for patients in the ICU.
“The American College of Gastroenterology currently recommends against probiotics for the prevention of C. diff.,” Dr. Mayer said. Although the American Gastroenterological Association has issued a “conditional recommendation” for prevention of C. diff. infection with probiotics, Dr. Mayer pointed out this is qualified by a “low quality of evidence” and it is not specific to the ICU setting.
“The evidence for benefit is weak or nonexistent, but the risks are real,” Dr. Mayer said.
To confirm that probiotic-associated ICU bacteremias in the HCA hospital database were, in fact, related to probiotics being taken by patients at time of admission, Dr. Mayer evaluated the record of each of the 86 patients with probiotic bacteremia–associated mortality.
“I identified the organism that grew from the blood cultures to confirm that it was contained in the probiotic the patient was taking,” explained Dr. Mayer, who said this information was available in the electronic medical records.
The risk of probiotic-associated bacteremia in ICU patients was consistent with a series of case series that prompted the study. Dr. Mayer explained that he became interested when he encountered patients on his ICU rounds who were taking probiotics. He knew very little about these agents and explored the medical literature to see what evidence was available.
“I found several case reports of ICU patients with probiotic-associated infections, several of which were suspected of being associated with contamination of the central lines,” Dr. Mayer said. In one case, the patient was not taking a probiotic, but a patient in an adjacent bed was receiving a powdered probiotic that was implicated. This prompted suspicion that the cause was central-line contamination.
This was evaluated in the HCA ICU database and also found to be a significant risk. Among the 67 patients in whom a capsule or tablet was used, the rate of probiotic-associated bacteremia was 0.33%. For those in which the probiotic was a powdered formulation, the rate was 0.76%, a significant difference (P < .01).
Dr. Mayer acknowledged that these data do not rule out all potential benefits from probiotics in the ICU. He believes an obstacle to proving benefit has been the heterogeneity of available products, which are likely to be relevant to any therapeutic role, including prevention of C. diff. infection.
“There are now a large number of products available, and they contain a large variety of strains of organisms, so this has been a difficult area to study,” he said. However, he maintains it is prudent at this point to avoid probiotics in the ICU because the risks are not confined to the patient making this choice.
“My concern is not just the lack of evidence of benefit relative to the risk for the patient but the potential for probiotics in the ICU to place other patients at risk,” Dr. Mayer said.
Others have also noted the potential benefits of probiotics in the ICU, but the promise remains elusive. In a 2018 review article published in the Journal of Emergency and Critical Care Medicine, the authors evaluated a series of potential applications of probiotics in critically ill patients. These included treatment of ventilator-associated pneumonia (VAP), catheter-associated urinary tract infections (CAUTI), and surgical-site infections (SSI). For each, the data were negative or inconclusive.
Over the 4 years that have passed since the review was published, several trials have further explored the potential benefits of probiotics in the ICU but none have changed this basic conclusion. For example, a 2021 multinational trial, published in The Lancet, randomized more than 2,600 patients to probiotics or placebo and showed no effect on VAP incidence (21.9% vs. 21.3%).
The lead author of the 2018 review, Heather A. Vitko, PhD, an associate professor in the department of acute and tertiary care, University of Pittsburgh School of Nursing, also emphasized that the potential for benefit cannot be considered without the potential for risk. She, like Dr. Mayer, cited the case studies implicating probiotics in systemic infections.
For administration, probiotic capsules or sachets “often need to be opened for administration through a feeding tube,” she noted. The risk of contamination comes from both the air and contaminated hands, the latter of which “can cause a translocation to a central line catheter where the microbes have direct entry into the systemic circulation.”
She did not call for a ban of probiotics in the ICU, but she did recommend “a precautionary approach,” encouraging clinicians to “distinguish between reality [of what has been proven] and what is presented in the marketing of antibiotics.”
Dr. Mayer and Dr. Vitko have reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
NASHVILLE, TENN. – Probiotic use among patients in the intensive care unit (ICU) increases the risk of bacteremia and associated mortality, a signal supported by several case series, according to new findings presented at the annual meeting of the American College of Chest Physicians (CHEST).
According to data presented by Scott Mayer, MD, chief resident at HealthONE Denver, which is part of the HCA Healthcare chain of hospitals, the risk is increased by any probiotic exposure. However, the risk is particularly acute for powdered formulations, presumably because powder more easily disseminates to contaminate central venous catheters.
“We think that probiotics should be eliminated entirely from the ICU. If not, we encourage eliminating the powder formulations,” said Dr. Mayer, who led the study.
The data linking probiotics to ICU bacteremia were drawn from 23,533 ICU admissions over a 5-year period in the HCA hospital database. Bacteremia proven to be probiotic-related was uncommon (0.37%), but the consequences were serious.
For those with probiotic-related bacteremia, the mortality rate was 25.6%, essentially twice the 13.5% mortality rate among those without probiotic bacteremia. An odds ratio drawn from a regression analysis confirmed a significant difference (OR, 2.23; 95% confidence interval, 1.30-3.71; P < .01).
“The absolute risk of mortality is modest but not insignificant,” said Dr. Mayer. This suggests one probiotic-related mortality for about every 200 patients taking a probiotic in the ICU.
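As a back-of-the-envelope check, the crude odds ratio implied by the two reported mortality rates can be computed directly from the percentages given above; the published OR of 2.23 comes from a regression adjusted for other variables, so the crude value differs slightly.

```python
# Crude odds ratio implied by the reported mortality rates:
# 25.6% with probiotic-related bacteremia vs. 13.5% without.
p_exposed = 0.256
p_unexposed = 0.135

def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

crude_or = odds(p_exposed) / odds(p_unexposed)
print(round(crude_or, 2))  # ~2.2, close to the adjusted OR of 2.23
```

That the crude and adjusted values nearly coincide suggests the covariates in the regression did little to change the association.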
These deaths occur without any clear compensatory benefit from taking probiotics, according to Dr. Mayer. There is a long list of potential benefits from probiotics that might be relevant to patients in the ICU, particularly prophylaxis for Clostridioides difficile infection, but also including a variety of gastrointestinal disorders, such as irritable bowel syndrome; however, none of these are firmly established in general, and particularly for patients in the ICU.
“The American College of Gastroenterology currently recommends against probiotics for the prevention of C. diff.,” Dr. Mayer said. Although the American Gastroenterological Association has issued a “conditional recommendation” for prevention of C. diff. infection with probiotics, Dr. Mayer pointed out this is qualified by a “low quality of evidence” and it is not specific to the ICU setting.
“The evidence for benefit is weak or nonexistent, but the risks are real,” Dr. Mayer said.
To confirm that probiotic-associated ICU bacteremias in the HCA hospital database were, in fact, related to probiotics being taken by patients at time of admission, Dr. Mayer evaluated the record of each of the 86 patients with probiotic bacteremia–associated mortality.
“I identified the organism that grew from the blood cultures to confirm that it was contained in the probiotic the patient was taking,” explained Dr. Mayer, who said this information was available in the electronic medical records.
The risk of probiotic-associated bacteremia in ICU patients was consistent with a series of case series that prompted the study. Dr. Mayer explained that he became interested when he encountered patients on his ICU rounds who were taking probiotics. He knew very little about these agents and explored the medical literature to see what evidence was available.
“I found several case reports of ICU patients with probiotic-associated infections, several of which were suspected of being associated with contamination of the central lines,” Dr. Mayer said. In one case, the patient was not taking a probiotic, but a patient in an adjacent bed was receiving a powdered probiotic that was implicated. This prompted suspicion that the cause was central-line contamination.
The risk posed by powdered formulations was also evaluated in the HCA ICU database and found to be significant. Among the 67 patients in whom a capsule or tablet was used, the rate of probiotic-associated bacteremia was 0.33%. For those in whom the probiotic was a powdered formulation, the rate was 0.76%, a significant difference (P < .01).
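A difference between two proportions like this is commonly assessed with a two-proportion z-test. The sketch below is illustrative only: the article does not report the group denominators, so the counts here are hypothetical values chosen to reproduce the reported rates.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical denominators; 33/10,000 and 76/10,000 match the
# reported 0.33% (capsule/tablet) and 0.76% (powder) rates.
z, p = two_proportion_z(33, 10_000, 76, 10_000)
print(f"z = {z:.2f}, p = {p:.2g}")
```

With group sizes anywhere near this scale, the difference is comfortably significant at the P < .01 threshold the article reports.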
Dr. Mayer acknowledged that these data do not rule out all potential benefits from probiotics in the ICU. He believes an obstacle to proving benefit has been the heterogeneity of available products, which is likely relevant to any therapeutic role, including prevention of C. diff. infection.
“There are now a large number of products available, and they contain a large variety of strains of organisms, so this has been a difficult area to study,” he said. However, he maintains it is prudent at this point to avoid probiotics in the ICU because the risks are not confined to the patient making this choice.
“My concern is not just the lack of evidence of benefit relative to the risk for the patient but the potential for probiotics in the ICU to place other patients at risk,” Dr. Mayer said.
Others have also noted the potential benefits of probiotics in the ICU, but the promise remains elusive. In a 2018 review article published in the Journal of Emergency and Critical Care Medicine, the authors evaluated a series of potential applications of probiotics in critically ill patients. These included treatment of ventilator-associated pneumonia (VAP), catheter-associated urinary tract infections (CAUTI), and surgical-site infections (SSI). For each, the data were negative or inconclusive.
Over the 4 years that have passed since the review was published, several trials have further explored the potential benefits of probiotics in the ICU but none have changed this basic conclusion. For example, a 2021 multinational trial, published in The Lancet, randomized more than 2,600 patients to probiotics or placebo and showed no effect on VAP incidence (21.9% vs. 21.3%).
The lead author of the 2018 review, Heather A. Vitko, PhD, an associate professor in the department of acute and tertiary care, University of Pittsburgh School of Nursing, also emphasized that the potential for benefit cannot be considered without the potential for risk. She, like Dr. Mayer, cited the case studies implicating probiotics in systemic infections.
For administration, probiotic capsules or sachets “often need to be opened for administration through a feeding tube,” she noted. The risk of contamination comes from both the air and contaminated hands, the latter of which “can cause a translocation to a central line catheter where the microbes have direct entry into the systemic circulation.”
She did not call for a ban on probiotics in the ICU, but she did recommend “a precautionary approach,” encouraging clinicians to “distinguish between reality [of what has been proven] and what is presented in the marketing” of probiotics.
Dr. Mayer and Dr. Vitko have reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM CHEST 2022
Opt-out HIV testing in EDs can help identify undiagnosed cases
Opt-out HIV testing in emergency departments (EDs) can help identify undiagnosed cases in populations with an HIV positivity rate greater than 0.2%, a new London-based study suggests.
On implementation of opt-out testing of patients aged 18-59 years admitted to the ED at St. George’s University Hospital in London, the proportion of tests performed increased from 57.9% to 69%. Upon increasing the age range to those 16 and older and implementing notional consent, overall testing coverage improved to 74.2%.
“An opt-out HIV testing program in the emergency department provides an excellent opportunity to diagnose patients who do not perceive themselves to be at risk or who have never tested before,” lead author Rebecca Marchant, MBBS, of St. George’s Hospital, said in an interview.
The study was published online in HIV Medicine.
She continued, “I think this take-away message would be applicable to other countries with prevalence of HIV greater than 2 per 1,000 people, as routine HIV testing in areas of high prevalence removes the need to target testing of specific populations, potentially preventing stigmatization.”
Despite excellent uptake of HIV testing in antenatal and sexual health services, 6% of people living with HIV in the United Kingdom are unaware of their status, and up to 42% of people living with HIV are diagnosed at a late stage of infection. Because blood is routinely drawn in EDs, these visits offer an excellent opportunity for increased testing. Late-stage diagnosis carries an increased risk of developing an AIDS-related illness, a sevenfold increase in risk for death in the first year after diagnosis, and increased rates of HIV transmission and health care costs.
The study was conducted in a region of London that has an HIV prevalence of 5.4 cases per 1,000 residents aged 15-59 years. Opt-out HIV testing was implemented in February 2019 for people aged 18-59, and in March 2021, this was changed to include those aged 16-plus years along with a move to notional consent.
Out of 78,333 HIV tests, there were 1,054 reactive results. Of these, 728 (69%) were known people living with HIV, 8 (0.8%) were not contactable, 2 (0.2%) retested elsewhere, and 3 (0.3%) declined a retest. A total of 259 false positives were determined by follow-up testing.
Of those who received a confirmed HIV diagnosis, 50 (4.8%) were newly diagnosed. HIV was suspected in only 22% of these people, and 48% had never previously tested for the virus. New diagnoses were 80% male with a median age of 42 years. CD4 counts varied widely (3 cells/mcL to 1,344 cells/mcL), with 60% diagnosed at a late stage (CD4 < 350 cells/mcL) and 40% with advanced immunosuppression (CD4 < 200 cells/mcL).
“It did not surprise me that heterosexuals made up 62% of all new diagnoses,” Dr. Marchant noted. “This is because routine opt-out testing in the ED offers the opportunity to test people who don’t perceive themselves to be at risk or who have never tested before, and I believe heterosexual people are more likely to fit into those categories. In London, new HIV diagnoses amongst men who have sex with men have fallen year on year likely due to pre-exposure prophylaxis being more readily available and a generally good awareness of HIV and testing amongst MSM.”
Michael D. Levine, MD, associate professor of emergency medicine at the University of California, Los Angeles, agreed with the study’s main findings.
“Doing widespread screening of patients in the emergency department is a feasible option,” Dr. Levine, who was not involved with this study, said in an interview. “But it only makes sense to do this in a population with some prevalence of HIV. With some forms of testing, like rapid HIV tests, you only get a presumptive positive and you then have a confirmatory test. The presumptive positives do have false positives associated with them. So if you’re in a population with very few cases of HIV, and you have a significant number of false positives, that’s going to be problematic. It’s going to add a tremendous amount of stress to the patient.”
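Dr. Levine’s point about false positives is, at bottom, Bayes’ rule: the positive predictive value (PPV) of a screening test collapses as prevalence falls. The sketch below uses illustrative sensitivity and specificity figures, not values from the study, to show the effect at the London prevalence of 5.4 per 1,000 versus much lower prevalences.

```python
def ppv(prevalence, sensitivity=0.998, specificity=0.997):
    """Positive predictive value via Bayes' rule.

    The default sensitivity/specificity are illustrative values
    for a rapid HIV screen, not figures from the study.
    """
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# PPV falls sharply as prevalence drops, which is Dr. Levine's point:
for prev in (0.0054, 0.002, 0.0002):
    print(f"prevalence {prev:.2%}: PPV {ppv(prev):.1%}")
```

For comparison, in the study above, 259 of the 1,054 reactive results proved on follow-up testing to be false positives.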
A version of this article first appeared on Medscape.com.
FROM HIV MEDICINE
Patients trying to lose weight overestimate their diet quality
Only 28% of the participants had good agreement – defined as a difference of 6 points or less – between their perceived diet quality and its actual quality based on Healthy Eating Index–2015 (HEI) scores at the end of the 12-month intervention.
Even fewer – only 13% – had good agreement with their perceived and actual improvement in diet quality.
Jessica Cheng, PhD, Harvard School of Public Health, Boston, presented the findings in an oral session at the American Heart Association scientific sessions.
The study suggests that “patients can benefit from concrete advice on aspects of their diet that could most benefit by being changed,” Dr. Cheng said in an interview.
“But once they know what to change, they may need additional advice on how to make and sustain those changes. Providers may direct their patients to resources such as dietitians, medically tailored meals, MyPlate, healthy recipes, etc.,” she advised.
“The findings are not surprising given that dietary recalls are subject to recall bias and depend on the person’s baseline nutrition knowledge or literacy,” Deepika Laddu, PhD, who was not involved with this research, said in an interview.
Misperception of diet intake is common in individuals with overweight or obesity, and one 90-minute session with a dietitian is not enough, according to Dr. Laddu, assistant professor at the University of Illinois at Chicago.
“The Dietary Guidelines for Americans does a really nice job at presenting all of the options,” she said. However, “understanding what a healthy diet pattern is, or how to adopt it, is confusing, due to a lot of ‘noise’, that is, the mixed messaging and unproven health claims, which add to inadequacies in health or nutrition literacy.”
“It is important to recognize that changing dietary practices is behaviorally challenging and complex,” she emphasized.
People who are interested in making dietary changes need to have ongoing conversations with a qualified health care professional, which most often starts with their primary care clinician.
“Given the well-known time constraints during a typical clinical visit, beyond that initial conversation, it is absolutely critical that patients be referred to qualified health care professionals such as a registered dietitian, nurse practitioner, health coach/educator, or diabetes educator for ongoing support,” Dr. Laddu said.
These providers can assess the patient’s initial diet, perceptions of a healthy diet, and diet goals, and address any gaps in health literacy, to enable the patient to develop long-lasting, realistic, and healthy eating behaviors.
Perceived vs. actual diet quality
Healthy eating is essential for heart and general health and longevity, but it is unclear if people who make lifestyle (diet and physical activity) changes to lose weight have an accurate perception of diet quality.
The researchers analyzed data from the SMARTER trial of 502 adults aged 35-58 living in the greater Pittsburgh area who were trying to lose weight.
Participants received a 90-minute weight loss counseling session addressing behavioral strategies and establishing dietary and physical activity goals. They all received instructions on how to monitor their diet, physical activity, and weight daily, using a smartphone app, a wristband tracker (Fitbit Charge 2), and a smart wireless scale. Half of the participants also received real-time personalized feedback on those behaviors, up to three times a day, via the study app.
The participants replied to two 24-hour dietary recall questionnaires at study entry and two questionnaires at 12 months.
Researchers analyzed data from the 116 participants who provided information about diet quality. At 1 year, they were asked to rate their diet quality, but also rate their diet quality 12 months earlier at baseline, on a scale of 0-100, where 100 is best.
The average weight loss at 12 months was similar in the groups with and without feedback from the app (roughly 3.2% of baseline weight), so the two study arms were combined. The participants had a mean age of 52 years; 80% were women and 87% were White. They had an average body mass index of 33 kg/m2.
Based on the information from the food recall questionnaires, the researchers calculated the patients’ HEI scores at the start and end of the study. The HEI score is a measure of how well a person’s diet adheres to the 2015-2020 Dietary Guidelines for Americans. It is based on an adequate consumption of nine types of foods – total fruits, whole fruits, total vegetables, greens and beans, total protein foods, seafood, and plant proteins (up to 5 points each), and whole grains, dairy, and fatty acids (up to 10 points each) – and reduced consumption of four dietary components – refined grains, sodium, added sugars, and saturated fats (up to 10 points each).
The healthiest diet has an HEI score of 100, and the Healthy People 2020 goal was an HEI score of 74, Dr. Cheng noted.
At 12 months, on average, the participants rated their diet quality at 70.5 points, whereas the researchers calculated that their average HEI score was only 56.
Participants thought they had improved their diet quality by about 20 points, Dr. Cheng reported. “However, the HEI would suggest they’ve improved it by 1.5 points, which is not a lot out of 100.”
“Future studies should examine the effects of helping people close the gap between their perceptions and objective diet quality measurements,” Dr. Cheng said in a press release from the AHA.
The study was funded by the National Heart, Lung, and Blood Institute, a division of the National Institutes of Health. Dr. Cheng and Dr. Laddu reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Only 28% of the participants had good agreement – defined as a difference of 6 points or less – between their perceived diet quality and its actual quality based on Healthy Eating Index–2015 (HEI) scores at the end of the 12-month intervention.
Even fewer – only 13% – had good agreement with their perceived and actual improvement in diet quality.
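The study's agreement criterion is simple arithmetic on the two scores; a minimal sketch (the 6-point threshold comes from the study, the function name and example values are illustrative):

```python
def agreement_category(perceived: float, hei: float, tolerance: float = 6.0) -> str:
    """Classify agreement between a self-rated diet quality score and a
    calculated HEI score, using the study's threshold of an absolute
    difference of 6 points or less."""
    return "good" if abs(perceived - hei) <= tolerance else "poor"

# Illustrative check using the study's group averages (not individual data):
print(agreement_category(70.5, 56.0))  # 14.5-point gap -> "poor"
```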
Jessica Cheng, PhD, Harvard School of Public Health, Boston, presented the findings in an oral session at the American Heart Association scientific sessions.
The study suggests that “patients can benefit from concrete advice on aspects of their diet that could most benefit by being changed,” Dr. Cheng said in an interview.
“But once they know what to change, they may need additional advice on how to make and sustain those changes. Providers may direct their patients to resources such as dietitians, medically tailored meals, MyPlate, healthy recipes, etc.,” she advised.
“The findings are not surprising given that dietary recalls are subject to recall bias and depend on the person’s baseline nutrition knowledge or literacy,” Deepika Laddu, PhD, who was not involved with this research, said in an interview.
Misperception of diet intake is common in individuals with overweight or obesity, and one 90-minute session with a dietitian is not enough, according to Dr. Laddu, assistant professor at the University of Illinois at Chicago.
“The Dietary Guidelines for Americans does a really nice job at presenting all of the options,” she said. However, “understanding what a healthy diet pattern is, or how to adopt it, is confusing, due to a lot of ‘noise’, that is, the mixed messaging and unproven health claims, which add to inadequacies in health or nutrition literacy.”
“It is important to recognize that changing dietary practices is behaviorally challenging and complex,” she emphasized.
People who are interested in making dietary changes need to have ongoing conversations with a qualified health care professional, which most often starts with their primary care clinician.
“Given the well-known time constraints during a typical clinical visit, beyond that initial conversation, it is absolutely critical that patients be referred to qualified healthcare professionals such as a registered dietitian, nurse practitioner, health coach/educator or diabetes educator, etc, for ongoing support.”
These providers can assess the patient’s initial diet, perceptions of a healthy diet, and diet goals, and address any gaps in health literacy, to enable the patient to develop long-lasting, realistic, and healthy eating behaviors.
Perceived vs. actual diet quality
Healthy eating is essential for heart and general health and longevity, but it is unclear if people who make lifestyle (diet and physical activity) changes to lose weight have an accurate perception of diet quality.
The researchers analyzed data from the SMARTER trial of 502 adults aged 35-58 living in the greater Pittsburgh area who were trying to lose weight.
Participants received a 90-minute weight loss counseling session addressing behavioral strategies and establishing dietary and physical activity goals. They all received instructions on how to monitor their diet, physical activity, and weight daily, using a smartphone app, a wristband tracker (Fitbit Charge 2), and a smart wireless scale. Half of the participants also received real-time personalized feedback on those behaviors, up to three times a day, via the study app.
The participants replied to two 24-hour dietary recall questionnaires at study entry and two questionnaires at 12 months.
Researchers analyzed data from the 116 participants who provided information about diet quality. At 1 year, participants were asked to rate their current diet quality and, retrospectively, their diet quality 12 months earlier at baseline, on a scale of 0-100, where 100 is best.
The average weight loss at 12 months was similar in the groups with and without feedback from the app (roughly 3.2% of baseline weight), so the two study arms were combined. The participants had a mean age of 52 years; 80% were women and 87% were White. They had an average body mass index of 33 kg/m2.
Based on the information from the food recall questionnaires, the researchers calculated the participants’ HEI scores at the start and end of the study. The HEI score is a measure of how well a person’s diet adheres to the 2015-2020 Dietary Guidelines for Americans. It is based on adequate consumption of nine food components – total fruits, whole fruits, total vegetables, greens and beans, total protein foods, and seafood and plant proteins (up to 5 points each), and whole grains, dairy, and fatty acids (up to 10 points each) – and reduced consumption of four dietary components – refined grains, sodium, added sugars, and saturated fats (up to 10 points each).
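The point structure described above can be tallied to confirm the scale's maximum; a brief sketch of the HEI-2015 component maxima (the component names are informal labels for this sketch, not official HEI variable names):

```python
# Maximum points per HEI-2015 adequacy component, as described above.
adequacy = {
    "total_fruits": 5, "whole_fruits": 5, "total_vegetables": 5,
    "greens_and_beans": 5, "total_protein_foods": 5,
    "seafood_and_plant_proteins": 5,
    "whole_grains": 10, "dairy": 10, "fatty_acids": 10,
}
# Moderation components, where lower consumption earns more points.
moderation = {
    "refined_grains": 10, "sodium": 10, "added_sugars": 10,
    "saturated_fats": 10,
}

max_score = sum(adequacy.values()) + sum(moderation.values())
print(max_score)  # 100, the maximum possible HEI score
```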
The healthiest diet has an HEI score of 100, and the Healthy People 2020 goal was an HEI score of 74, Dr. Cheng noted.
At 12 months, on average, the participants rated their diet quality at 70.5 points, whereas the researchers calculated that their average HEI score was only 56.
Participants thought they had improved their diet quality by about 20 points, Dr. Cheng reported. “However, the HEI would suggest they’ve improved it by 1.5 points, which is not a lot out of 100.”
“Future studies should examine the effects of helping people close the gap between their perceptions and objective diet quality measurements,” Dr. Cheng said in a press release from the AHA.
The study was funded by the National Heart, Lung, and Blood Institute, a division of the National Institutes of Health. Dr. Cheng and Dr. Laddu reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM AHA 2022
Intermittent fasting diet trend linked to disordered eating
Researchers from the University of Toronto analyzed data from more than 2,700 adolescents and young adults in the Canadian Study of Adolescent Health Behaviors and found that, for women, intermittent fasting (IF) was significantly associated with overeating, binge eating, vomiting, laxative use, and compulsive exercise.
IF in women was also associated with higher scores on the Eating Disorder Examination Questionnaire (EDE-Q), which was used to assess eating disorder (ED) psychopathology.
Study investigator Kyle Ganson, PhD, assistant professor in the Factor-Inwentash Faculty of Social Work at the University of Toronto, said in an interview that evidence on the effectiveness of IF for weight loss and disease prevention is mixed, and that it’s important to understand the potential harms of IF – even if there are benefits for some.
“If anything, this study shines light on the fact that engagement in IF may be connected with problematic ED behaviors, requiring health care professionals to be very aware of this contemporary and popular dietary trend, despite proponents on social media touting the effectiveness and benefits,” he said.
The study was published online in Eating Behaviors.
Touted for health benefits
The practice of IF has been gaining popularity partly because of reputable medical experts touting its health benefits. Johns Hopkins Medicine, for instance, cited evidence that IF boosts working memory, improves blood pressure, enhances physical performance, and prevents obesity. Yet there has been little research on its harms.
As part of the Canadian Study of Adolescent Health Behaviors, Dr. Ganson and associates analyzed data on more than 2,700 adolescents and young adults aged 16-30 recruited via social media ads in November and December 2021. The sample included women, men, and transgender or gender-nonconforming individuals.
Study participants answered questions about weight perception, current weight change behavior, engagement in IF, and participation in eating disorder behaviors. They were also administered the EDE-Q, which measures eating disorder psychopathology.
In total, 47% of women (n = 1,470), 38% of men (n = 1,060), and 52% of transgender or gender-nonconforming individuals (n = 225) reported engaging in IF during the past year.
Dr. Ganson and associates found that, for women, IF in the past 12 months and past 30 days were significantly associated with all eating disorder behaviors, including overeating, loss of control, binge eating, vomiting, laxative use, compulsive exercise, and fasting – as well as higher overall EDE-Q global scores.
For men, IF in the past 12 months was significantly associated with compulsive exercise, and higher overall EDE-Q global scores.
The team found that, for transgender or gender-nonconforming (TGNC) participants, IF was positively associated with higher EDE-Q global scores.
The investigators acknowledged some limitations of the study: recruitment through social media ads could introduce selection bias, and data collection relied heavily on participants’ self-reports, which are also susceptible to bias.
“Certainly, there needs to be more investigation on this dietary practice,” said Dr. Ganson.
Screening warranted
Dr. Ganson noted that additional research is needed to support the findings from his study, and to further illuminate the potential harms of IF.
Health care professionals “need to be aware of common, contemporary dietary trends that young people engage in and are commonly discussed on social media, such as IF,” he noted. In addition, he’d like to see health care professionals assess their dieting patients for IF and follow up with assessments of ED-related attitudes and behaviors.
“Additionally, there are likely bidirectional relationships between IF and ED attitudes and behaviors, so professionals should be aware of the ways in which ED behaviors are masked as IF engagement,” Dr. Ganson said.
More research needed
Commenting on the findings, Angela Guarda, MD, professor of eating disorders at Johns Hopkins University and director of the eating disorders program at Johns Hopkins Hospital, both in Baltimore, said more research is needed on outcomes for IF.
“We lack a definitive answer. The reality is that IF may help some and harm others and is most likely not healthy for all,” she said, noting that the study results “support what many in the eating disorders field believe, namely that IF for someone who is at risk for an eating disorder is likely to be ill advised.”
She added that “continued research is needed to establish its safety, and for whom it may be a therapeutic versus an iatrogenic recommendation.”
The study was funded by the Connaught New Researcher Award. The authors reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM EATING BEHAVIORS
Children with autism show distinct brain features related to motor impairment
Previous research suggests that individuals with autism spectrum disorder (ASD) overlap in motor impairment with those with developmental coordination disorder (DCD). But the two conditions may differ significantly in some areas, as children with ASD tend to show weaker skills in social motor tasks such as imitation, wrote Emil Kilroy, PhD, of the University of Southern California, Los Angeles, and colleagues.
The neurobiological basis of autism remains unknown, despite many research efforts, in part because of the heterogeneity of the disease, said corresponding author Lisa Aziz-Zadeh, PhD, also of the University of Southern California, in an interview.
Comorbidity with other disorders is a strong contributing factor to heterogeneity, and approximately 80% of autistic individuals have motor impairments and meet criteria for a diagnosis of DCD, said Dr. Aziz-Zadeh. “Controlling for other comorbidities, such as developmental coordination disorder, when trying to understand the neural basis of autism is important, so that we can understand which neural circuits are related to [core symptoms of autism] and which ones are related to motor impairments that are comorbid with autism, but not necessarily part of the core symptomology,” she explained. “We focused on white matter pathways here because many researchers now think the underlying basis of autism, besides genetics, is brain connectivity differences.”
In their study published in Scientific Reports, the researchers reviewed data from whole-brain correlational tractography for 22 individuals with ASD, 16 with DCD, and 21 typically developing individuals, who served as the control group. The mean age of the participants was approximately 11 years; the age range was 8-17 years.
Overall, patterns of brain diffusion (movement of fluid, mainly water molecules, in the brain) were significantly different in ASD children, compared with typically developing children.
The ASD group showed significantly reduced diffusivity in the bilateral fronto-parietal cingulum and the left parolfactory cingulum. This finding reflects previous studies suggesting an association between brain patterns in the cingulum area and ASD. But the current study is “the first to identify the fronto-parietal and the parolfactory portions of the cingulum as well as the anterior caudal u-fibers as specific to core ASD symptomatology and not related to motor-related comorbidity,” the researchers wrote.
Differences in brain diffusivity were associated with worse performance on motor skills and behavioral measures for children with ASD and children with DCD, compared with controls.
Motor development was assessed using the Total Movement Assessment Battery for Children-2 (MABC-2) and the Florida Apraxia Battery modified for children (FAB-M). The MABC-2 is among the most common tools for measuring motor skills and identifying clinically relevant motor deficits in children and teens aged 3-16 years. The test includes three subtest scores (manual dexterity, gross-motor aiming and catching, and balance) and a total score. Scores are based on a child’s best performance on each component, and higher scores indicate better functioning. In the new study, MABC-2 total scores averaged 10.57 for controls, compared with 5.76 in the ASD group and 4.31 in the DCD group.
Children with ASD differed from the other groups in social measures. Social skills were measured using several tools, including the Social Responsivity Scale (SRS Total), which is a parent-completed survey that includes a total score designed to reflect the severity of social deficits in ASD. It is divided into five subscales for parents to assess a child’s social skill impairment: social awareness, social cognition, social communication, social motivation, and mannerisms. Scores for the SRS are calculated in T-scores, in which a score of 50 represents the mean. T-scores of 59 and below are generally not associated with ASD, and patients with these scores are considered to have low to no symptomatology. Scores on the SRS Total in the new study were 45.95, 77.45, and 55.81 for the controls, ASD group, and DCD group, respectively.
Results should raise awareness
“The results were largely predicted in our hypotheses – that we would find specific white matter pathways in autism that would differ from [what we saw in typically developing patients and those with DCD], and that diffusivity in ASD would be related to socioemotional differences,” Dr. Aziz-Zadeh said, in an interview.
“What was surprising was that some pathways that had previously been thought to be different in autism were also compromised in DCD, indicating that they were common to motor deficits which both groups shared, not to core autism symptomology,” she noted.
A message for clinicians from the study is that a dual diagnosis of DCD is often missing in ASD practice, said Dr. Aziz-Zadeh. “Given that approximately 80% of children with ASD have DCD, testing for DCD and addressing potential motor issues should be more common practice,” she said.
Dr. Aziz-Zadeh and colleagues are now investigating relationships between the brain, behavior, and the gut microbiome. “We think that understanding autism from a full-body perspective, examining interactions between the brain and the body, will be an important step in this field,” she emphasized.
The study was limited by several factors, including the small sample size, the use of only right-handed participants, and the use of self-reports by children and parents, the researchers noted. Additionally, they noted that white matter develops at different rates in different age groups, and future studies might consider age as a factor, as well as further behavioral assessments, they said.
Small sample size limits conclusions
“Understanding the neuroanatomic differences that may contribute to the core symptoms of ASD is a very important goal for the field, particularly how they relate to other comorbid symptoms and neurodevelopmental disorders,” said Michael Gandal, MD, of the department of psychiatry at the University of Pennsylvania, Philadelphia, and a member of the Lifespan Brain Institute at the Children’s Hospital of Philadelphia, in an interview.
“While this study provides some clues into how structural connectivity may relate to motor coordination in ASD, it will be important to replicate these findings in a much larger sample before we can really appreciate how robust these findings are and how well they generalize to the broader ASD population,” Dr. Gandal emphasized.
The study was supported by the Eunice Kennedy Shriver National Institute of Child Health and Human Development. The researchers had no financial conflicts to disclose. Dr. Gandal had no financial conflicts to disclose.
Previous research suggests that individuals with ASD overlap in motor impairment with those with DCD. But these two conditions may differ significantly in some areas, as children with ASD tend to show weaker skills in social motor tasks such as imitation, wrote Emil Kilroy, PhD, of the University of Southern California, Los Angeles, and colleagues.
The neurobiological basis of autism remains unknown, despite many research efforts, in part because of the heterogeneity of the disease, said corresponding author Lisa Aziz-Zadeh, PhD, also of the University of Southern California, in an interview.
Comorbidity with other disorders is a strong contributing factor to heterogeneity, and approximately 80% of autistic individuals have motor impairments and meet criteria for a diagnosis of DCD, said Dr. Aziz-Zadeh. “Controlling for other comorbidities, such as developmental coordination disorder, when trying to understand the neural basis of autism is important, so that we can understand which neural circuits are related to [core symptoms of autism] and which ones are related to motor impairments that are comorbid with autism, but not necessarily part of the core symptomology,” she explained. “We focused on white matter pathways here because many researchers now think the underlying basis of autism, besides genetics, is brain connectivity differences.”
In their study published in Scientific Reports, the researchers reviewed data from whole-brain correlational tractography for 22 individuals with autism spectrum disorder, 16 with developmental coordination disorder, and 21 normally developing individuals, who served as the control group. The mean age of the participants was approximately 11 years; the age range was 8-17 years.
Overall, patterns of brain diffusion (movement of fluid, mainly water molecules, in the brain) were significantly different in ASD children, compared with typically developing children.
The ASD group showed significantly reduced diffusivity in the bilateral fronto-parietal cingulum and the left parolfactory cingulum. This finding reflects previous studies suggesting an association between brain patterns in the cingulum area and ASD. But the current study is “the first to identify the fronto-parietal and the parolfactory portions of the cingulum as well as the anterior caudal u-fibers as specific to core ASD symptomatology and not related to motor-related comorbidity,” the researchers wrote.
Differences in brain diffusivity were associated with worse performance on motor skills and behavioral measures for children with ASD and children with DCD, compared with controls.
Motor development was assessed using the Movement Assessment Battery for Children-2 (MABC-2) and the Florida Apraxia Battery modified for children (FAB-M). The MABC-2 is among the most common tools for measuring motor skills and identifying clinically relevant motor deficits in children and teens aged 3-16 years. The test includes three subtest scores (manual dexterity, gross-motor aiming and catching, and balance) and a total score. Scores are based on a child’s best performance on each component, and higher scores indicate better functioning. In the new study, the MABC-2 total scores averaged 10.57 for controls, compared with 5.76 in the ASD group and 4.31 in the DCD group.
Children with ASD differed from the other groups in social measures. Social skills were measured using several tools, including the Social Responsiveness Scale (SRS), a parent-completed survey whose total score is designed to reflect the severity of social deficits in ASD. It is divided into five subscales on which parents rate a child’s social skill impairment: social awareness, social cognition, social communication, social motivation, and mannerisms. SRS results are reported as T-scores, in which a score of 50 represents the mean. T-scores of 59 and below are generally not associated with ASD, and patients with these scores are considered to have low to no symptomatology. SRS total scores in the new study were 45.95 for the controls, 77.45 for the ASD group, and 55.81 for the DCD group.
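For readers unfamiliar with the T-score convention described above, the arithmetic can be sketched as follows. This is a minimal illustration only: real SRS scoring uses the scale's published norm tables, and the population mean and standard deviation below are assumed values, not figures from the study.

```python
def t_score(raw: float, pop_mean: float, pop_sd: float) -> float:
    """Convert a raw score to a T-score (mean 50, SD 10 by convention)."""
    return 50 + 10 * (raw - pop_mean) / pop_sd

def srs_band(t: float) -> str:
    """Interpret an SRS total T-score using the cutoff cited in the article:
    59 and below is generally not associated with ASD."""
    return "low to no symptomatology" if t <= 59 else "elevated"

# Hypothetical raw score of 45 against assumed norms (mean 30, SD 10):
t = t_score(45, pop_mean=30, pop_sd=10)
print(t, srs_band(t))  # 65.0 elevated
```

On this convention, the study's group means (45.95, 77.45, 55.81) place only the ASD group above the 59-point threshold.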
Results should raise awareness
“The results were largely predicted in our hypotheses – that we would find specific white matter pathways in autism that would differ from [what we saw in typically developing patients and those with DCD], and that diffusivity in ASD would be related to socioemotional differences,” Dr. Aziz-Zadeh said, in an interview.
“What was surprising was that some pathways that had previously been thought to be different in autism were also compromised in DCD, indicating that they were common to motor deficits which both groups shared, not to core autism symptomology,” she noted.
A message for clinicians from the study is that a dual diagnosis of DCD is often missing in ASD practice, said Dr. Aziz-Zadeh. “Given that approximately 80% of children with ASD have DCD, testing for DCD and addressing potential motor issues should be more common practice,” she said.
Dr. Aziz-Zadeh and colleagues are now investigating relationships between the brain, behavior, and the gut microbiome. “We think that understanding autism from a full-body perspective, examining interactions between the brain and the body, will be an important step in this field,” she emphasized.
The study was limited by several factors, including the small sample size, the inclusion of only right-handed participants, and the reliance on self-reports by children and parents, the researchers noted. In addition, white matter develops at different rates in different age groups, so future studies should consider age as a factor and include further behavioral assessments, they said.
Small sample size limits conclusions
“Understanding the neuroanatomic differences that may contribute to the core symptoms of ASD is a very important goal for the field, particularly how they relate to other comorbid symptoms and neurodevelopmental disorders,” said Michael Gandal, MD, of the department of psychiatry at the University of Pennsylvania, Philadelphia, and a member of the Lifespan Brain Institute at the Children’s Hospital of Philadelphia, in an interview.
“While this study provides some clues into how structural connectivity may relate to motor coordination in ASD, it will be important to replicate these findings in a much larger sample before we can really appreciate how robust these findings are and how well they generalize to the broader ASD population,” Dr. Gandal emphasized.
The study was supported by the Eunice Kennedy Shriver National Institute of Child Health and Human Development. The researchers had no financial conflicts to disclose. Dr. Gandal had no financial conflicts to disclose.
FROM SCIENTIFIC REPORTS
Add tezepelumab to SCIT to improve cat allergy symptoms?
Adding the biologic tezepelumab to subcutaneous immunotherapy (SCIT) reduced cat allergy symptoms more effectively than the allergy shots alone, according to results of a phase 1/2 clinical trial.
“One year of allergen immunotherapy [AIT] combined with tezepelumab was significantly more effective than SCIT alone in reducing the nasal response to allergen challenge both at the end of treatment and one year after stopping treatment,” lead study author Jonathan Corren, MD, of the University of California, Los Angeles, and his colleagues wrote in The Journal of Allergy and Clinical Immunology.
“This persistent improvement in clinical response was paralleled by reductions in nasal transcripts for multiple immunologic pathways, including mast cell activation.”
The study was cited in a news release from the National Institutes of Health that said that the approach may work in a similar way with other allergens.
The Food and Drug Administration recently approved tezepelumab for the treatment of severe asthma in people aged 12 years and older. Tezepelumab, a monoclonal antibody, works by blocking the cytokine thymic stromal lymphopoietin (TSLP).
“Cells that cover the surface of organs like the skin and intestines or that line the inside of the nose and lungs rapidly secrete TSLP in response to signals of potential danger,” according to the NIH news release. “In allergic disease, TSLP helps initiate an overreactive immune response to otherwise harmless substances like cat dander, provoking airway inflammation that leads to the symptoms of allergic rhinitis.”
Testing an enhanced strategy
The double-blind CATNIP trial was conducted by Dr. Corren and colleagues at nine sites in the United States from 2015 to 2019. The trial included patients aged 18-65 years who had had moderate to severe cat-induced allergic rhinitis for at least 2 years.
The researchers excluded patients with recurrent acute or chronic sinusitis, those who had undergone SCIT with cat allergen within the past 10 years, those with seasonal or perennial allergen sensitivity during nasal challenges, and those with a history of persistent asthma.
In the parallel-design study, 121 participants were randomly allocated into four groups: 32 patients were treated with intravenous tezepelumab plus cat SCIT, 31 received the allergy shots alone, 30 received tezepelumab alone, and 28 received placebo alone for 52 weeks, followed by 52 weeks of observation.
Participants received SCIT (10,000 bioequivalent allergy units per milliliter) or matched placebo via subcutaneous injections weekly in increasing doses for around 12 weeks, followed by monthly maintenance injections (4,000 BAU or maximum tolerated dose) until week 48.
They received tezepelumab (700 mg IV) or matched placebo 1-3 days prior to the SCIT or placebo SCIT injections once every 4 weeks through week 24, then before or on the same day as the SCIT or placebo injections through week 48.
Measures of effectiveness
Participants were also given nasal allergy challenges – one spritz of a nasal spray containing cat allergen extract in each nostril at screening, baseline, and weeks 26, 52, 78, and 104. The researchers recorded participants’ total nasal symptom score (TNSS) and peak nasal inspiratory flow at 5, 15, 30, and 60 minutes after being sprayed and hourly for up to 6 hours post challenge. Blood and nasal cell samples were also collected.
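The measurement schedule above also underlies the trial's "TNSS peak 0-1 hour" endpoint reported later in the article. The sketch below, using entirely hypothetical scores, shows how a peak score over a time window might be computed from such timestamped measurements; the variable names and data layout are illustrative assumptions, not the investigators' actual analysis code.

```python
# Minutes after challenge at which TNSS was recorded, per the schedule above:
# 5, 15, 30, and 60 minutes, then hourly out to 6 hours.
TIMEPOINTS_MIN = [5, 15, 30, 60] + [m * 60 for m in range(2, 7)]

def peak_tnss(scores: dict[int, int], window_min: tuple[int, int]) -> int:
    """Peak TNSS among measurements taken within [start, end] minutes."""
    start, end = window_min
    return max(s for t, s in scores.items() if start <= t <= end)

# Hypothetical post-challenge scores keyed by minutes after the spray:
scores = {5: 6, 15: 8, 30: 7, 60: 5, 120: 4, 180: 3, 240: 3, 300: 2, 360: 2}
print(peak_tnss(scores, (0, 60)))  # peak in the 0-1 hour window -> 8
```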
The research team performed skin prick tests using serial dilutions of cat extract and an intradermal skin test (IDST) using the concentration of allergen that produced an early response of at least 15 mm at baseline. They measured early-phase responses for the both tests at 15 minutes and late-phase response to the IDST at 6 hours.
They measured serum levels of cat dander–specific IgE, IgG4, and total IgE using fluoroenzyme immunoassay. They measured serum interleukin-5 and IL-13 using high-sensitivity single-molecule digital immunoassay and performed nasal brushing using a 3-mm cytology brush 6 hours after a nasal allergy challenge. They performed whole-genome transcriptional profiling on the extracted RNA.
Combination therapy worked better and longer
The combined therapy worked better while being administered. Although the allergy shots alone stopped working after they were discontinued, the combination continued to benefit participants 1 year after that therapy ended.
At week 52, statistically significant reductions in TNSS induced by nasal allergy challenges occurred in patients receiving tezepelumab plus SCIT compared with patients receiving SCIT alone.
At week 104, 1 year after treatment ended, the primary endpoint TNSS was not significantly different in the tezepelumab-plus-SCIT group than in the SCIT-alone group, but TNSS peak 0–1 hour was significantly lower in the combination treatment group than in the SCIT-alone group.
In analysis of gene expression from nasal epithelial samples, participants who had been treated with the combination but not with either therapy by itself showed persistent modulation of the nasal immunologic environment, including diminished mast cell function. This was explained in large part by decreased transcription of the gene TPSAB1 (tryptase). Tryptase protein in nasal fluid was also decreased in the combination group, compared with the SCIT-alone group.
Adverse and serious adverse events, including infections and infestations as well as respiratory, thoracic, mediastinal, gastrointestinal, immune system, and nervous system disorders, did not differ significantly between treatment groups.
Four independent experts welcome the results
Patricia Lynne Lugar, MD, associate professor of medicine in the division of pulmonology, allergy, and critical care medicine at Duke University, Durham, N.C., found the results, especially the 1-year posttreatment response durability, surprising.
“AIT is a very effective treatment that often provides prolonged symptom improvement and is ‘curative’ in many cases,” she said in an interview. “If further studies show that tezepelumab offers long-term results, more patients might opt for combination therapy.
“A significant strength of the study is its evaluation of responses of the combination therapy on cellular output and gene expression,” Dr. Lugar added. “The mechanism by which AIT modulates the allergic response is largely understood. Tezepelumab may augment this modulation to alter the Th2 response upon exposure to the allergens.”
Will payors cover the prohibitively costly biologic?
Scott Frank, MD, associate professor in the department of family medicine and community health at Case Western Reserve University, Cleveland, called the study well designed and rigorous.
“The practicality of the approach may be limited by the need for intravenous administration of tezepelumab in addition to the traditional allergy shot,” he noted by email, “and the cost of this therapeutic approach is not addressed.”
Christopher Brooks, MD, clinical assistant professor of allergy and immunology in the department of otolaryngology at Ohio State University Wexner Medical Center, Columbus, also pointed out the drug’s cost.
“Tezepelumab is currently an expensive biologic, so it remains to be seen whether patients and payors will be willing to pay for this add-on medication when AIT by itself still remains very effective,” he said by email.
“AIT is most effective when given for 5 years, so it also remains to be seen whether the results and conclusions of this study would still hold true if done for the typical 5-year treatment period,” he added.
Stokes Peebles, MD, professor of medicine in the division of allergy, pulmonary, and critical care medicine at Vanderbilt University Medical Center, Nashville, Tenn., called the study “very well designed by a highly respected group of investigators using well-matched study populations.
“Tezepelumab has been shown to work in asthma, and there is no reason to think it would not work in allergic rhinitis,” he said in an interview.
“However, while the results of the combined therapy were statistically significant, their clinical significance was not clear. Patients do not care about statistical significance. They want to know whether a drug will be clinically significant,” he added.
Many people avoid cat allergy symptoms by avoiding cats and, in some cases, people who live with cats, he said. For most of the rest, medical therapy, usually nasal corticosteroids and antihistamines, keeps symptoms in check.
“Patients with bad allergies who have not done well with SCIT may consider adding tezepelumab, but it incurs a major cost. If medical therapy doesn’t work, allergy shots are available at roughly $3,000 per year. Adding tezepelumab costs around $40,000 more per year,” he explained. “Does the slight clinical benefit justify the greatly increased cost?”
The authors and uninvolved experts recommend further related research.
The research was supported by the National Institute of Allergy and Infectious Diseases. AstraZeneca and Amgen donated the drug used in the study. Dr. Corren reported financial relationships with AstraZeneca, and one coauthor reported relevant financial relationships with Amgen and other pharmaceutical companies. The remaining coauthors reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
“However, while the results of the combined therapy were statistically significant, their clinical significance was not clear. Patients do not care about statistical significance. They want to know whether a drug will be clinically significant,” he added.
Many people avoid cat allergy symptoms by avoiding cats and, in some cases, by avoiding people who live with cats, he said. Medical therapy, usually involving nasal corticosteroids and antihistamines, helps most people avoid cat allergy symptoms.
“Patients with bad allergies who have not done well with SCIT may consider adding tezepelumab, but it incurs a major cost. If medical therapy doesn’t work, allergy shots are available at roughly $3,000 per year. Adding tezepelumab costs around $40,000 more per year,” he explained. “Does the slight clinical benefit justify the greatly increased cost?”
The authors and uninvolved experts recommend further related research.
The research was supported by the National Institute of Allergy and Infectious Diseases. AstraZeneca and Amgen donated the drug used in the study. Dr. Corren reported financial relationships with AstraZeneca, and one coauthor reported relevant financial relationships with Amgen and other pharmaceutical companies. The remaining coauthors reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Adding the biologic tezepelumab to a year of subcutaneous allergen immunotherapy (SCIT) reduced nasal responses to cat allergen more than the allergy shots alone, and the benefit persisted a year after treatment ended, according to results of a phase 1/2 clinical trial.
“One year of allergen immunotherapy [AIT] combined with tezepelumab was significantly more effective than SCIT alone in reducing the nasal response to allergen challenge both at the end of treatment and one year after stopping treatment,” lead study author Jonathan Corren, MD, of the University of California, Los Angeles, and his colleagues wrote in The Journal of Allergy and Clinical Immunology.
“This persistent improvement in clinical response was paralleled by reductions in nasal transcripts for multiple immunologic pathways, including mast cell activation.”
The study was cited in a news release from the National Institutes of Health that said that the approach may work in a similar way with other allergens.
The Food and Drug Administration recently approved tezepelumab for the treatment of severe asthma in people aged 12 years and older. Tezepelumab, a monoclonal antibody, works by blocking the cytokine thymic stromal lymphopoietin (TSLP).
“Cells that cover the surface of organs like the skin and intestines or that line the inside of the nose and lungs rapidly secrete TSLP in response to signals of potential danger,” according to the NIH news release. “In allergic disease, TSLP helps initiate an overreactive immune response to otherwise harmless substances like cat dander, provoking airway inflammation that leads to the symptoms of allergic rhinitis.”
Testing an enhanced strategy
The double-blind CATNIP trial was conducted by Dr. Corren and colleagues at nine sites in the United States from 2015 to 2019. It included patients aged 18-65 years who had had moderate to severe cat-induced allergic rhinitis for at least 2 years.
The researchers excluded patients with recurrent acute or chronic sinusitis. They excluded patients who had undergone SCIT with cat allergen within the past 10 years or seasonal or perennial allergen sensitivity during nasal challenges. They also excluded persons with a history of persistent asthma.
In the parallel-design study, 121 participants were randomly allocated into four groups: 32 patients were treated with intravenous tezepelumab plus cat SCIT, 31 received the allergy shots alone, 30 received tezepelumab alone, and 28 received placebo alone for 52 weeks, followed by 52 weeks of observation.
Participants received SCIT (10,000 bioequivalent allergy units per milliliter) or matched placebo via subcutaneous injections weekly in increasing doses for around 12 weeks, followed by monthly maintenance injections (4,000 BAU or maximum tolerated dose) until week 48.
They received tezepelumab (700 mg IV) or matched placebo 1-3 days prior to the SCIT or placebo SCIT injections once every 4 weeks through week 24, then before or on the same day as the SCIT or placebo injections through week 48.
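The dosing calendar described above can be laid out as a rough timeline. The sketch below is illustrative only: the exact visit weeks are a simplifying assumption, not taken from the trial protocol.

```python
# Illustrative CATNIP dosing timeline based on the schedule described:
# weekly SCIT escalation for ~12 weeks, monthly maintenance to week 48,
# and tezepelumab every 4 weeks through week 48. Exact visit weeks here
# are an assumption for illustration, not protocol-specified values.
scit_escalation = list(range(0, 12))        # weekly injections, weeks 0-11
scit_maintenance = list(range(12, 49, 4))   # ~monthly, weeks 12-48
tezepelumab_doses = list(range(0, 49, 4))   # every 4 weeks, weeks 0-48

print(f"SCIT escalation visits:  {len(scit_escalation)}")   # 12
print(f"SCIT maintenance visits: {len(scit_maintenance)}")  # 10
print(f"Tezepelumab doses:       {len(tezepelumab_doses)}") # 13
```

Under these assumptions a participant in the combination arm would receive roughly 22 SCIT (or placebo) injections and 13 biologic infusions over the 52-week treatment period.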
Measures of effectiveness
Participants were also given nasal allergy challenges – one spritz of a nasal spray containing cat allergen extract in each nostril at screening, baseline, and weeks 26, 52, 78, and 104. The researchers recorded participants’ total nasal symptom score (TNSS) and peak nasal inspiratory flow at 5, 15, 30, and 60 minutes after being sprayed and hourly for up to 6 hours post challenge. Blood and nasal cell samples were also collected.
The research team performed skin prick tests using serial dilutions of cat extract and an intradermal skin test (IDST) using the concentration of allergen that produced an early response of at least 15 mm at baseline. They measured early-phase responses for both tests at 15 minutes and the late-phase response to the IDST at 6 hours.
They measured serum levels of cat dander–specific IgE, IgG4, and total IgE using fluoroenzyme immunoassay. They measured serum interleukin-5 and IL-13 using high-sensitivity single-molecule digital immunoassay and performed nasal brushing using a 3-mm cytology brush 6 hours after a nasal allergy challenge. They performed whole-genome transcriptional profiling on the extracted RNA.
Combination therapy worked better and longer
The combined therapy worked better while being administered. Although the allergy shots alone stopped working after they were discontinued, the combination continued to benefit participants 1 year after that therapy ended.
At week 52, statistically significant reductions in TNSS induced by nasal allergy challenges occurred in patients receiving tezepelumab plus SCIT compared with patients receiving SCIT alone.
At week 104, 1 year after treatment ended, the primary endpoint, TNSS, did not differ significantly between the tezepelumab-plus-SCIT group and the SCIT-alone group, but peak TNSS during the first hour after challenge was significantly lower with the combination than with SCIT alone.
In an analysis of gene expression from nasal epithelial samples, participants treated with the combination, but not those given either therapy alone, showed persistent modulation of the nasal immunologic environment, including diminished mast cell function. This was explained in large part by decreased transcription of TPSAB1, the gene encoding tryptase. Tryptase protein in nasal fluid was also decreased in the combination group compared with the SCIT-alone group.
Adverse and serious adverse events, including infections and infestations as well as respiratory, thoracic, mediastinal, gastrointestinal, immune system, and nervous system disorders, did not differ significantly between treatment groups.
Four independent experts welcome the results
Patricia Lynne Lugar, MD, associate professor of medicine in the division of pulmonology, allergy, and critical care medicine at Duke University, Durham, N.C., found the results, especially the 1-year posttreatment response durability, surprising.
“AIT is a very effective treatment that often provides prolonged symptom improvement and is ‘curative’ in many cases,” she said in an interview. “If further studies show that tezepelumab offers long-term results, more patients might opt for combination therapy.
“A significant strength of the study is its evaluation of responses of the combination therapy on cellular output and gene expression,” Dr. Lugar added. “The mechanism by which AIT modulates the allergic response is largely understood. Tezepelumab may augment this modulation to alter the Th2 response upon exposure to the allergens.”
Will payors cover the prohibitively costly biologic?
Scott Frank, MD, associate professor in the department of family medicine and community health at Case Western Reserve University, Cleveland, called the study well designed and rigorous.
“The practicality of the approach may be limited by the need for intravenous administration of tezepelumab in addition to the traditional allergy shot,” he noted by email, “and the cost of this therapeutic approach is not addressed.”
Christopher Brooks, MD, clinical assistant professor of allergy and immunology in the department of otolaryngology at Ohio State University Wexner Medical Center, Columbus, also pointed out the drug’s cost.
“Tezepelumab is currently an expensive biologic, so it remains to be seen whether patients and payors will be willing to pay for this add-on medication when AIT by itself still remains very effective,” he said by email.
“AIT is most effective when given for 5 years, so it also remains to be seen whether the results and conclusions of this study would still hold true if done for the typical 5-year treatment period,” he added.
Stokes Peebles, MD, professor of medicine in the division of allergy, pulmonary, and critical care medicine at Vanderbilt University Medical Center, Nashville, Tenn., called the study “very well designed by a highly respected group of investigators using well-matched study populations.
“Tezepelumab has been shown to work in asthma, and there is no reason to think it would not work in allergic rhinitis,” he said in an interview.
“However, while the results of the combined therapy were statistically significant, their clinical significance was not clear. Patients do not care about statistical significance. They want to know whether a drug will be clinically significant,” he added.
Many people avoid cat allergy symptoms by avoiding cats and, in some cases, by avoiding people who live with cats, he said. For most others, medical therapy, usually nasal corticosteroids and antihistamines, keeps symptoms in check.
“Patients with bad allergies who have not done well with SCIT may consider adding tezepelumab, but it incurs a major cost. If medical therapy doesn’t work, allergy shots are available at roughly $3,000 per year. Adding tezepelumab costs around $40,000 more per year,” he explained. “Does the slight clinical benefit justify the greatly increased cost?”
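Dr. Peebles's cost comparison can be made concrete with his round figures. A minimal sketch, using his estimates rather than list or negotiated prices:

```python
# Rough annual cost comparison for cat-allergy treatment, using
# Dr. Peebles's round estimates (not list or negotiated prices).
scit_per_year = 3_000        # allergy shots alone, USD/year
tezepelumab_addon = 40_000   # added biologic cost, USD/year

combo_per_year = scit_per_year + tezepelumab_addon
cost_multiple = combo_per_year / scit_per_year

print(f"SCIT alone:      ${scit_per_year:,}/yr")
print(f"SCIT + biologic: ${combo_per_year:,}/yr (~{cost_multiple:.0f}x)")
```

By these estimates the combination costs about $43,000 per year, roughly 14 times the cost of allergy shots alone, which frames his question about whether the incremental clinical benefit justifies the price.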
The authors and uninvolved experts recommend further related research.
The research was supported by the National Institute of Allergy and Infectious Diseases. AstraZeneca and Amgen donated the drug used in the study. Dr. Corren reported financial relationships with AstraZeneca, and one coauthor reported relevant financial relationships with Amgen and other pharmaceutical companies. The remaining coauthors reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM THE JOURNAL OF ALLERGY AND CLINICAL IMMUNOLOGY
Extreme temperature shifts tied to increase in hate speech
Extreme shifts in temperature are tied to an increase in online hate speech, according to researchers from the Potsdam Institute for Climate Impact Research.
What to know
- Analyzing over four billion tweets posted on the social media platform Twitter in the United States, researchers found that hate speech increased across climate zones, income groups, and belief systems when temperatures were too hot or too cold outside.
- The minimum number of hate tweets appears to occur when temperatures are between 15° and 18° C (59° to 65° F). The precise feel-good temperature window varies a little across climate zones, depending on what temperatures are common in those regions.
- When temperatures rose or fell from the feel-good temperature margin, online hate increased up to 12% for colder temperatures and up to 22% for hotter temperatures.
- The United Nations defines hate speech as cases of discriminatory language with reference to a person or a group on the basis of their religion, ethnicity, nationality, race, color, descent, gender, or other identity factor.
- The consequences of more aggressive online behavior can be severe, as hate speech has been found to have negative effects on the mental health of online hate victims, especially for young people and marginalized groups. It can also be predictive of hate crimes in the offline world.
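The temperature bands above translate between units as follows. A quick sketch using the article's figures (the article rounds the upper edge of the window to 65° F):

```python
# Convert the reported "feel-good" window (15-18 C) to Fahrenheit and
# restate the hate-speech increases outside it, per the article's figures.
def c_to_f(c: float) -> float:
    return c * 9 / 5 + 32

low_f, high_f = c_to_f(15), c_to_f(18)
print(f"Minimum hate-tweet window: {low_f:.0f}-{high_f:.0f} F")  # 59-64 F

# Outside the window, hate tweets rose by up to:
increases = {"colder": 0.12, "hotter": 0.22}
for direction, pct in increases.items():
    print(f"  up to {pct:.0%} more hate tweets when {direction}")
```

Note the asymmetry the study reports: heat above the comfort window is associated with a larger rise in hate tweets (up to 22%) than cold below it (up to 12%).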
A version of this article first appeared on Medscape.com.
This is a summary of the article, “Temperature Impacts on Hate Speech Online: Evidence From Four Billion Tweets,” published by The Lancet Planetary Health on September 1, 2022. The full article can be found on thelancet.com.
Balanced crystalloid fluids surpass saline for kidney transplant
ORLANDO – Using a low-chloride, balanced crystalloid solution for all intravenous fluids in patients receiving a deceased donor kidney transplant resulted in significantly fewer episodes of delayed graft function than using saline, in BEST-Fluids, a new multicenter trial with 807 randomized and evaluable patients.
“The findings suggest that balanced crystalloids should be the standard-of-care IV fluid in deceased donor kidney transplantations,” Michael G. Collins, MBChB, PhD, said at the annual meeting of the American Society of Nephrology.
“Balanced crystalloids are cheap, readily available worldwide, and this simple change in kidney transplant practice can easily be implemented in global practice ... almost immediately,” said Dr. Collins, a nephrologist at Royal Adelaide Hospital, Australia.
A 1-L bag of balanced crystalloid fluid is more expensive, with a U.S. retail cost of about $2-$5, compared with about $1 for a bag of saline, Dr. Collins added.
Various other commentators had mixed views. Some agreed with Dr. Collins and said the switch could be made immediately, although one researcher wanted to see more trials. Another wondered why balanced crystalloid fluid hadn’t seemed to provide benefit in studies in acute kidney injury.
Treating 10 patients prevents one delayed graft function
The incidence of delayed graft function, defined as the need for dialysis during the 7 days following transplantation, occurred in 30.0% of 404 patients who received balanced crystalloid fluids (Plasma-Lyte 148) and in 39.7% of 403 patients who received saline starting at the time of randomization (prior to surgery) until 48 hours post-surgery, Dr. Collins reported.
This translated into a significant, adjusted relative risk reduction of 26% and a number needed to treat of 10 to result in one avoided episode of delayed graft function.
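These summary numbers can be checked directly from the reported event rates. A minimal sketch; note the crude relative risk reduction computed here is slightly smaller than the trial's covariate-adjusted 26% figure:

```python
# Unadjusted check of the BEST-Fluids summary statistics from the
# reported delayed-graft-function rates. The trial's 26% relative risk
# reduction is covariate-adjusted, so the crude figure below differs.
rate_balanced = 0.300  # balanced crystalloid arm (Plasma-Lyte 148)
rate_saline = 0.397    # saline arm

arr = rate_saline - rate_balanced  # absolute risk reduction
rrr = arr / rate_saline            # crude relative risk reduction
nnt = 1 / arr                      # number needed to treat

print(f"ARR: {arr:.1%}")  # 9.7%
print(f"RRR: {rrr:.1%}")  # ~24.4% crude (26% after adjustment)
print(f"NNT: {nnt:.1f}")  # ~10.3, i.e. about 10 patients
```

The number needed to treat is simply the reciprocal of the absolute risk reduction, which is why a roughly 10-percentage-point gap between arms yields an NNT of about 10.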
Preventing delayed graft function is important because it is a “major complication” of deceased donor kidney transplantation that usually occurs in about 30%-50% of people who receive these organs, Dr. Collins explained. Incident delayed graft function leads to higher hospitalization costs because of a prolonged need for dialysis and extended hospital days, as well as increased risk for long-term graft failure and death.
A secondary outcome – the number of dialysis sessions required during the 28 days following transplantation – was 406 sessions among those who received balanced crystalloid fluids and 596 sessions among the controls who received saline, a significant adjusted relative decrease of 30%.
Freedom from need for dialysis by 12 weeks after surgery increased by a significant 10% among those treated with balanced crystalloid fluids, compared with controls. The balanced crystalloid fluids were also significantly linked with an average 1-L increase in urine output during the first 2 days after transplantation, compared with controls.
Chloride is the culprit
“I think this is driven by the harmful effects of saline,” which is currently the standard fluid that kidney transplant patients receive worldwide, said Dr. Collins. Specifically, he cited the chloride content of saline – which contains 0.9% sodium chloride – as the culprit by causing reduced kidney perfusion.
“Some data suggest that saline may be harmful because of chloride acidosis producing vasoconstriction and increasing ischemia,” commented Karen A. Griffin, MD, chief of the renal section at the Edward Hines, Jr. VA Medical Center, Hines, Illinois. But Dr. Griffin said she’d like to see further study of balanced crystalloid fluids in this setting before she’d be comfortable using it routinely as a replacement for saline.
However, Pascale H. Lane, MD, a pediatric nephrologist with Oklahoma University Health, Oklahoma City, predicted that based on these results, “I think it will be rapidly embraced” by U.S. clinicians. Dr. Lane expressed concern about the availability of an adequate supply of balanced crystalloid fluid, but Dr. Collins said he did not believe supply would be an issue based on current availability.
This was “a beautiful study, very well done, with nice results, and a very easy switch to balanced crystalloid fluids without harm,” commented Richard Lafayette, MD, a nephrologist and professor of medicine at Stanford (Calif.) University.
Success attributed to early treatment
But Dr. Lafayette also wondered, “Why should this work for transplant patients when it did not work for patients who develop acute kidney injury in the ICU?” And he found it hard to understand how the impact of the balanced crystalloid fluid could manifest so quickly, with a change in urine output during the first day following surgery.
Dr. Collins attributed the rapid effects and overall success to the early initiation of balanced crystalloid fluids before the transplant occurred.
The BEST-Fluids trial ran at 16 centers in Australia and New Zealand and enrolled patients from January 2018 to August 2020. It enrolled adults and children scheduled to receive a deceased donor kidney, excluding those who weighed less than 20 kg and those who received multiple organs.
Enrolled patients averaged about 55 years old, about 63% were men, and their average duration on dialysis prior to surgery was about 30 months. The study randomized 808 patients who received their transplanted kidney, with 807 included in the efficacy analysis. Patients in each of the two groups showed very close balance for all reported parameters of patient and donor characteristics. During the period of randomized fluid treatment, patients in the balanced crystalloid group received an average of just over 8 L of fluid, while those in the control group received an average of just over 7 L.
During follow-up, serious adverse events were rare and balanced, with three in the balanced crystalloid group and four among controls.
The only significant difference in adverse events was the rate of ICU admissions that required ventilation, which occurred in one patient in the balanced crystalloid group and 12 controls.
BEST-Fluids received balanced crystalloid and saline solutions at no charge from Baxter Healthcare, which markets Plasma-Lyte 148. The study received no other commercial funding. Dr. Collins, Dr. Griffin, and Dr. Lane have reported no relevant financial relationships. Dr. Lafayette has received personal fees and grants from Alexion, Aurinia, Calliditas, Omeros, Pfizer, Roche, Travere, and Vera and has been an advisor to Akahest and Equillium.
A version of this article first appeared on Medscape.com.
ORLANDO – Using a low-chloride, balanced crystalloid solution for all intravenous fluids received by patients who received a deceased donor kidney transplant resulted in significantly fewer episodes of delayed graft function, compared with patients who received saline as their IV fluids, in a new multicenter trial with 807 randomized and evaluable patients called BEST-Fluids.
“The findings suggest that balanced crystalloids should be the standard-of-care IV fluid in deceased donor kidney transplantations,” Michael G. Collins, MBChB, PhD, said at the annual meeting of the American Society of Nephrology.
“Balanced crystalloids are cheap, readily available worldwide, and this simple change in kidney transplant practice can easily be implemented in global practice ... almost immediately,” said Dr. Collins, a nephrologist at Royal Adelaide Hospital, Australia.
A 1-L bag of balanced crystalloid fluid is more expensive; however, it has a U.S. retail cost of about $2-$5 per bag, compared with about $1 per bag of saline fluid, Dr. Collins added.
Various other commentators had mixed views. Some agreed with Dr. Collins and said the switch could be made immediately, although one researcher wanted to see more trials. Another wondered why balanced crystalloid fluid hadn’t seemed to provide benefit in studies in acute kidney injury.
Treating 10 patients prevents one delayed graft function
The incidence of delayed graft function, defined as the need for dialysis during the 7 days following transplantation, occurred in 30.0% of 404 patients who received balanced crystalloid fluids (Plasma-Lyte 148) and in 39.7% of 403 patients who received saline starting at the time of randomization (prior to surgery) until 48 hours post-surgery, Dr. Collins reported.
This translated into a significant, adjusted relative risk reduction of 26% and a number needed to treat of 10 to result in one avoided episode of delayed graft function.
Preventing delayed graft function is important because it is a “major complication” of deceased donor kidney transplantation that usually occurs in about 30%-50% of people who receive these organs, Dr. Collins explained. Incident delayed graft function leads to higher hospitalization costs because of a prolonged need for dialysis and extended hospital days, as well as increased risk for long-term graft failure and death.
A secondary outcome – the number of dialysis sessions required during the 28 days following transplantation – was 406 sessions among those who received balanced crystalloid fluids and 596 sessions among the controls who received saline, a significant adjusted relative decrease of 30%.
Freedom from need for dialysis by 12 weeks after surgery increased by a significant 10% among those treated with balanced crystalloid fluids, compared with controls. The balanced crystalloid fluids were also significantly linked with an average 1-L increase in urine output during the first 2 days after transplantation, compared with controls.
Chloride is the culprit
“I think this is driven by the harmful effects of saline,” which is currently the standard fluid that kidney transplant patients receive worldwide, said Dr. Collins. Specifically, he cited the chloride content of saline – which contains 0.9% sodium chloride – as the culprit by causing reduced kidney perfusion.
“Some data suggest that saline may be harmful because of chloride acidosis producing vasoconstriction and increasing ischemia,” commented Karen A. Griffin, MD, chief of the renal section at the Edward Hines, Jr. VA Medical Center, Hines, Illinois. But Dr. Griffin said she’d like to see further study of balanced crystalloid fluids in this setting before she’d be comfortable using it routinely as a replacement for saline.
However, Pascale H. Lane, MD, a pediatric nephrologist with Oklahoma University Health, Oklahoma City, predicted that based on these results, “I think it will be rapidly embraced” by U.S. clinicians. Dr. Lane expressed concern about the availability of an adequate supply of balanced crystalloid fluid, but Dr. Collins said he did not believe supply would be an issue based on current availability.
This was “a beautiful study, very well done, with nice results, and a very easy switch to balanced crystalloid fluids without harm,” commented Richard Lafayette, MD, a nephrologist and professor of medicine at Stanford (Calif.) University.
Success attributed to early treatment
But Dr. Lafayette also wondered, “Why should this work for transplant patients when it did not work for patients who develop acute kidney injury in the ICU?” And he found it hard to understand how the impact of the balanced crystalloid fluid could manifest so quickly, with a change in urine output during the first day following surgery.
Dr. Collins attributed the rapid effects and overall success to the early initiation of balanced crystalloid fluids before the transplant occurred.
The BEST-Fluids trial ran at 16 centers in Australia and New Zealand and enrolled patients from January 2018 to August 2020. It enrolled adults and children scheduled to receive a deceased donor kidney, excluding those who weighed less than 20 kg and those who received multiple organs.
Enrolled patients averaged about 55 years old, about 63% were men, and their average duration on dialysis prior to surgery was about 30 months. The study randomized 808 patients who received their transplanted kidney, with 807 included in the efficacy analysis. Patients in each of the two groups showed very close balance for all reported parameters of patient and donor characteristics. During the period of randomized fluid treatment, patients in the balanced crystalloid group received an average of just over 8 L of fluid, while those in the control group received an average of just over 7 L.
During follow-up, serious adverse events were rare and balanced, with three in the balanced crystalloid group and four among controls.
The only significant difference in adverse events was the rate of ICU admissions that required ventilation, which occurred in one patient in the balanced crystalloid group and 12 controls.
BEST-Fluids received balanced crystalloid and saline solutions at no charge from Baxter Healthcare, which markets Plasma-Lyte 148. The study received no other commercial funding. Dr. Collins, Dr. Griffin, and Dr. Lane have reported no relevant financial relationships. Dr. Lafayette has received personal fees and grants from Alexion, Aurinia, Calliditas, Omeros, Pfizer, Roche, Travere, and Vera and has been an advisor to Akahest and Equillium.
A version of this article first appeared on Medscape.com.
ORLANDO – Using a low-chloride, balanced crystalloid solution for all intravenous fluids received by patients who received a deceased donor kidney transplant resulted in significantly fewer episodes of delayed graft function, compared with patients who received saline as their IV fluids, in a new multicenter trial with 807 randomized and evaluable patients called BEST-Fluids.
“The findings suggest that balanced crystalloids should be the standard-of-care IV fluid in deceased donor kidney transplantations,” Michael G. Collins, MBChB, PhD, said at the annual meeting of the American Society of Nephrology.
“Balanced crystalloids are cheap, readily available worldwide, and this simple change in kidney transplant practice can easily be implemented in global practice ... almost immediately,” said Dr. Collins, a nephrologist at Royal Adelaide Hospital, Australia.
A 1-L bag of balanced crystalloid fluid is more expensive; however, it has a U.S. retail cost of about $2-$5 per bag, compared with about $1 per bag of saline fluid, Dr. Collins added.
Various other commentators had mixed views. Some agreed with Dr. Collins and said the switch could be made immediately, although one researcher wanted to see more trials. Another wondered why balanced crystalloid fluid hadn’t seemed to provide benefit in studies in acute kidney injury.
Treating 10 patients prevents one delayed graft function
The incidence of delayed graft function, defined as the need for dialysis during the 7 days following transplantation, occurred in 30.0% of 404 patients who received balanced crystalloid fluids (Plasma-Lyte 148) and in 39.7% of 403 patients who received saline starting at the time of randomization (prior to surgery) until 48 hours post-surgery, Dr. Collins reported.
This translated into a significant, adjusted relative risk reduction of 26% and a number needed to treat of 10, meaning one episode of delayed graft function was avoided for every 10 patients treated with balanced crystalloid fluids.
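The headline effect sizes can be sanity-checked from the reported event rates. Note that the trial's 26% relative risk reduction is covariate-adjusted, so the crude calculation below (variable names are illustrative, not from the study) lands close to, but not exactly on, the published figure:

```python
# Crude effect-size check from the reported delayed graft function rates.
# The trial's 26% relative risk reduction is adjusted for covariates, so
# these unadjusted numbers approximate, but do not exactly match, it.

dgf_balanced = 0.300   # event rate, balanced crystalloid arm (Plasma-Lyte 148)
dgf_saline = 0.397     # event rate, saline arm

arr = dgf_saline - dgf_balanced   # absolute risk reduction: 0.097
rrr = arr / dgf_saline            # crude relative risk reduction: ~24%
nnt = 1 / arr                     # number needed to treat: ~10.3

print(f"ARR = {arr:.3f}")         # 0.097
print(f"crude RRR = {rrr:.1%}")   # 24.4%
print(f"NNT = {nnt:.1f}")         # 10.3, reported as 10
```

The crude relative risk reduction of roughly 24% sits just below the adjusted 26%, and the number needed to treat rounds to the 10 reported at the meeting.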
Preventing delayed graft function is important because it is a “major complication” of deceased donor kidney transplantation that usually occurs in about 30%-50% of people who receive these organs, Dr. Collins explained. Incident delayed graft function leads to higher hospitalization costs because of a prolonged need for dialysis and extended hospital days, as well as increased risk for long-term graft failure and death.
A secondary outcome – the number of dialysis sessions required during the 28 days following transplantation – was 406 sessions among those who received balanced crystalloid fluids and 596 sessions among the controls who received saline, a significant adjusted relative decrease of 30%.
Freedom from need for dialysis by 12 weeks after surgery increased by a significant 10% among those treated with balanced crystalloid fluids, compared with controls. The balanced crystalloid fluids were also significantly linked with an average 1-L increase in urine output during the first 2 days after transplantation, compared with controls.
Chloride is the culprit
“I think this is driven by the harmful effects of saline,” which is currently the standard fluid that kidney transplant patients receive worldwide, said Dr. Collins. Specifically, he cited the chloride content of saline – which contains 0.9% sodium chloride – as the culprit, causing reduced kidney perfusion.
“Some data suggest that saline may be harmful because of chloride acidosis producing vasoconstriction and increasing ischemia,” commented Karen A. Griffin, MD, chief of the renal section at the Edward Hines, Jr. VA Medical Center, Hines, Illinois. But Dr. Griffin said she’d like to see further study of balanced crystalloid fluids in this setting before she’d be comfortable using it routinely as a replacement for saline.
However, Pascale H. Lane, MD, a pediatric nephrologist with Oklahoma University Health, Oklahoma City, predicted that based on these results, “I think it will be rapidly embraced” by U.S. clinicians. Dr. Lane expressed concern about the availability of an adequate supply of balanced crystalloid fluid, but Dr. Collins said he did not believe supply would be an issue based on current availability.
This was “a beautiful study, very well done, with nice results, and a very easy switch to balanced crystalloid fluids without harm,” commented Richard Lafayette, MD, a nephrologist and professor of medicine at Stanford (Calif.) University.
Success attributed to early treatment
But Dr. Lafayette also wondered, “Why should this work for transplant patients when it did not work for patients who develop acute kidney injury in the ICU?” And he found it hard to understand how the impact of the balanced crystalloid fluid could manifest so quickly, with a change in urine output during the first day following surgery.
Dr. Collins attributed the rapid effects and overall success to the early initiation of balanced crystalloid fluids before the transplant occurred.
The BEST-Fluids trial ran at 16 centers in Australia and New Zealand and enrolled patients from January 2018 to August 2020. It enrolled adults and children scheduled to receive a deceased donor kidney, excluding those who weighed less than 20 kg and those who received multiple organs.
Enrolled patients averaged about 55 years old, about 63% were men, and their average duration on dialysis prior to surgery was about 30 months. The study randomized 808 patients who received their transplanted kidney, with 807 included in the efficacy analysis. Patients in each of the two groups showed very close balance for all reported parameters of patient and donor characteristics. During the period of randomized fluid treatment, patients in the balanced crystalloid group received an average of just over 8 L of fluid, while those in the control group received an average of just over 7 L.
During follow-up, serious adverse events were rare and balanced, with three in the balanced crystalloid group and four among controls.
The only significant difference in adverse events was the rate of ICU admissions that required ventilation, which occurred in one patient in the balanced crystalloid group and 12 controls.
BEST-Fluids received balanced crystalloid and saline solutions at no charge from Baxter Healthcare, which markets Plasma-Lyte 148. The study received no other commercial funding. Dr. Collins, Dr. Griffin, and Dr. Lane have reported no relevant financial relationships. Dr. Lafayette has received personal fees and grants from Alexion, Aurinia, Calliditas, Omeros, Pfizer, Roche, Travere, and Vera and has been an advisor to Akahest and Equillium.
A version of this article first appeared on Medscape.com.
AT KIDNEY WEEK 2022