Norovirus Outbreak Traced to Reusable Grocery Bag
A norovirus outbreak affecting a girls’ soccer team competing at an intrastate tournament was traced to a reusable grocery bag that held the team’s snacks, according to a report in the May issue of the Journal of Infectious Diseases.
The bag was stored in a hotel bathroom used by the index case when she developed vomiting and diarrhea, presumably from exposure to the pathogen before the tournament. "This investigation confirms the potential for aerosol contamination of fomites in norovirus outbreaks, which has long been suspected to contribute to persistent problems on cruise ships, in nursing homes, and in other settings," said Kimberly K. Repp, Ph.D., of Oregon Health and Science University, Portland, and William E. Keene, Ph.D., of the Oregon Public Health Division.
Dr. Repp and Dr. Keene investigated the outbreak to determine its scope and etiology. Although approximately 2,000 children from Washington and Oregon attended the weekend tournament in King County, Wash., the outbreak was confined to a single group of 17 Oregon girls aged 13-14 years and their 4 adult chaperones. The group shared hotel rooms and ate together at local restaurants.
The index case moved to her chaperone’s room when she became symptomatic, had vomiting and diarrhea throughout the night, and left for home with the chaperone in the morning. Thus, she had no contact with her teammates or other chaperones after symptom onset, and no direct contact with the grocery bag or the food it contained.
However, the bag of snacks was retrieved from the bathroom she used and brought to a different room, where it was handled by the team and other chaperones at lunch the next day. The reusable, open-top grocery bag was made from laminated woven polypropylene. The other members of the affected party removed from it chips and cookies that were in commercial sealed packages, as well as fresh grapes, and consumed them hours after the index case and the chaperone had left the area.
The chaperone who drove the index case home later became ill, as did six other girls and chaperones in the group, 36-57 hours later. All seven cases reported vomiting, and four reported diarrhea. The duration of symptoms was 1-7 days (median 3 days). There were no hospitalizations.
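From the case counts reported above, a rough secondary attack rate can be worked out. This is an illustrative calculation only, using the figures in this report (17 girls plus 4 chaperones, 1 index case, 7 secondary cases):

```python
# Illustrative secondary attack rate from the figures reported above.
group_size = 17 + 4          # 17 girls plus 4 adult chaperones
at_risk = group_size - 1     # exclude the index case
secondary_cases = 7          # the driving chaperone plus six others
attack_rate = secondary_cases / at_risk
print(f"Secondary attack rate: {attack_rate:.0%}")  # 35%
```

That is, roughly a third of the exposed group became ill, despite having had no direct contact with the index case after her symptoms began.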
Illness was associated with a composite exposure to the bag itself or to any item it contained. A more detailed chain of contamination could not be established, because the group members had passed the bag and its contents around among themselves.
Three stool specimens were obtained from people who became ill, and all were positive for the same strain of norovirus. Two of 10 swabs taken from the grocery bag 2 weeks after the incident also were positive for that strain of norovirus.
"The data indicate that virus aerosolized within the hotel bathroom settled upon the grocery bag and its contents, and it was touching the bag and consumption of its contents that led to the outbreak," Dr. Repp and Dr. Keene said (J. Infect. Dis. 2012;205:1639-41 [doi: 10.1093/infdis/jis250]).
No other attendees at the tournament, patrons at the hotel or restaurants, or staff members reported any gastrointestinal illness.
"Although we certainly recommend not storing food in bathrooms, it is more important to emphasize that areas where aerosol exposures may have occurred should be thoroughly disinfected; this includes not only exposed surfaces but also objects in the environment that could serve as fomites," the researchers added.
"We also recommend that persons with responsibilities for cleaning (e.g., housekeeping staff or family members) be informed about incidents of vomiting or diarrhea and best practices for disinfection," they said.
Dr. Repp has since moved to the Washington County (Ore.) Department of Health and Human Services. This work was supported by the National Institutes of Health and the Centers for Disease Control and Prevention (CDC). Neither Dr. Repp nor Dr. Keene reported any relevant financial conflicts of interest.
The report by Repp and Keene is "a fascinating example of how a unique exposure and transmission scenario can result in a norovirus outbreak," said Aron J. Hall, D.V.M.
"The chain of events in this outbreak demonstrates how this tenacious virus finds a way to move from host to host, even when those hosts have no direct contact with one another," Dr. Hall said.
The thorough epidemiologic investigation "nicely demonstrates that not only can noroviruses be aerosolized and dispersed onto fomites without direct contact, but also that exposure to those contaminated fomites can then cause disease," he noted.
Aron J. Hall, D.V.M., is in the division of viral diseases at the National Center for Immunization and Respiratory Diseases at the Centers for Disease Control and Prevention. This work was supported by the CDC, but Dr. Hall reported no relevant financial conflicts of interest. These remarks were taken from his editorial comment accompanying the report by Dr. Repp and Dr. Keene (J. Infect. Dis. 2012;205:1622-24 [doi: 10.1093/infdis/jis251]).
FROM THE JOURNAL OF INFECTIOUS DISEASES
Women's Stroke Risk Higher in AF, Regardless of Warfarin Use
Women with atrial fibrillation are at higher risk of stroke than are men with the condition, regardless of their use of warfarin or their risk profiles, according to a large, population-based cohort study reported in the May 9 issue of JAMA.
In numerous large studies of AF, women’s stroke risk has been shown to be 40%-70% higher than men’s risk. The reason for this discrepancy is unclear, but some have suggested that it might be because women are less likely than men to receive prophylactic warfarin therapy.
The findings of this population-based cohort study disprove this hypothesis and show that warfarin use is not a significant contributor to the discrepancy between the sexes in stroke risk. The results also suggest that "current anticoagulant therapy to prevent stroke might not be sufficient for older women, and newer strategies are needed to further reduce stroke risk in women with AF," said Meytal Avgil Tsadok, Ph.D., of the division of clinical epidemiology, McGill University Health Center, Montreal, and her associates.
The investigators assessed warfarin use and subsequent stroke incidence in a cohort of more than 83,000 elderly patients throughout Quebec who were discharged with a primary or secondary diagnosis of AF in 1998-2007. The 39,398 men and 44,115 women all were aged 65 years and older at admission.
Following hospital discharge, the crude stroke rate was significantly higher in women (5.8%) than in men (4.3%), as was the overall incidence of stroke: 2.02/100 person-years among women, compared with 1.61/100 person-years among men.
In a multivariate analysis that adjusted for comorbid conditions, stroke risk factors, and warfarin use, the higher risk of stroke among women persisted, with a hazard ratio of 1.14, the researchers said (JAMA 2012;307:1952-8).
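For perspective, the unadjusted incidence rates reported above imply a crude rate ratio of about 1.25, which the multivariate adjustment reduced, but did not eliminate, to a hazard ratio of 1.14. A minimal sketch of that comparison, using only the figures given in this report:

```python
# Crude incidence rates reported above, per 100 person-years.
rate_women = 2.02
rate_men = 1.61
crude_ratio = rate_women / rate_men
print(f"Crude rate ratio: {crude_ratio:.2f}")   # ~1.25
# Compare with the adjusted hazard ratio of 1.14 reported in the study:
# adjustment for comorbidities and warfarin use explains part,
# but not all, of the excess risk in women.
```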
Women were slightly more likely than men to fill warfarin prescriptions (60.6% vs. 58.2%). Adherence to warfarin therapy was judged to be good in both sexes, and neither warfarin use nor adherence level accounted for the discrepancy between men and women in stroke risk.
Other than female sex, the strongest independent risk factor for stroke was a history of the disorder. In an analysis of the subgroup of patients who had no history of stroke, women still were at higher risk than men, with a hazard ratio of 1.17.
"The difference between the sexes was mainly driven by the rates in older (greater than 75 years) patients (2.38/100 person-years for women vs 1.95 in men)," Dr. Avgil Tsadok and her colleagues said.
"Thus, women older than 75 years represent the most important target population for stroke prevention in patients with AF, and the effectiveness of novel anticoagulants in this population in real-world practice will need to be closely monitored," they added.
This study was supported by the Canadian Institutes of Health Research. No relevant financial conflicts of interest were reported.
FROM JAMA
Major Finding: The rate of stroke was significantly higher in women (5.8%) than in men (4.3%) with AF, and the overall incidence was 2.02/100 person-years for women vs 1.61/100 person-years for men.
Data Source: A population-based cohort study of 39,398 men and 44,115 women aged 65 years and older diagnosed as having AF in 1998-2007 and followed for stroke incidence.
Disclosures: This study was supported by the Canadian Institutes of Health Research. No relevant financial conflicts of interest were reported.
Data Supporting Probiotics' Benefit Found Weak but Favorable
The use of probiotics appears to lower the risk of developing diarrhea while taking antibiotic therapy, based on a systematic review of 82 randomized clinical trials in the May 9 issue of JAMA.
The majority of the studies, however, were poorly conducted with considerable limitations, and more research is needed to determine which probiotics are associated with the greatest efficacy in the setting of specific antibiotics, said Susanne Hempel, Ph.D., of the Southern California Evidence-Based Practice Center, Rand Health, Santa Monica, and her associates.
In fact, when examined individually, most of the studies showed no significant benefit from using probiotics. However, when the results of 63 of these trials were pooled in a meta-analysis involving 11,811 subjects, the use of probiotics decreased the relative risk of developing antibiotic-related diarrhea when compared with not using them, the investigators found.
Adjunct probiotic therapy also reduced the number of study subjects who experienced severe diarrhea, they added.
The investigators reviewed the literature in 12 electronic databases and screened 2,426 reports on probiotic use published during the last 30 years. They included 82 randomized clinical trials that compared the therapy against no treatment, placebo, or a different dose of probiotics in their meta-analysis.
The study subjects included children, adults, and the elderly who were taking antibiotics for a variety of indications. Sixteen of the trials examined the use of a single antibiotic, and the remainder assessed numerous antibiotics. Two trials focused on the use of probiotics to treat rather than to prevent antibiotic-associated diarrhea.
All types of probiotics were included in the meta-analysis, including the genera Lactobacillus, Bifidobacterium, Saccharomyces, Streptococcus, Enterococcus, and Bacillus, alone or in various combinations.
Overall, the quality of the research was considered low. Fifty-nine studies "lacked adequate information to assess the overall risk of bias." Sixty-four never stated whether treatment allocation was blinded, 31 didn’t report an intent-to-treat analysis, and approximately half did not include a calculation of the study’s statistical power to detect differences in outcomes.
In addition, 17 trials were industry sponsored, and 52 did not clarify the role of funding or conflicts of interest. Perhaps most important, 59 of the 82 randomized clinical trials did not report on adverse events specifically related to the use of probiotics, Dr. Hempel and her colleagues said.
Probiotics have been linked to serious adverse effects such as fungemia and bacterial sepsis. "It is noteworthy that few trials addressed these outcomes, especially because cases of such infections suspected to be associated with the administered organisms were reported decades ago," Dr. Hempel and her associates noted.
The use of probiotics was found to reduce the risk of antibiotic-associated diarrhea, with a relative risk of 0.58. This benefit was consistent across several subgroups of patients and in different sensitivity analyses.
"The treatment effect equates to a number needed to treat of 13," the investigators said (JAMA 2012;307:1959-69).
There was no evidence that the benefit varied by type of probiotic, but it was impossible to assess this question adequately because most of the trials used blends of genera, species, and strains.
The treatment benefit appeared to be consistent regardless of patient age, the clinical indication for antibiotic therapy, and the duration of antibiotic therapy. It wasn’t possible to assess the advantages of probiotics by type of antibiotic agent because the trials in this meta-analysis rarely specified which antibiotics were used, or else they involved patients taking a variety of antibiotics.
This study was funded by Rand. No relevant financial relationships were reported.
The use of probiotics appears to lower the risk of developing diarrhea while taking antibiotic therapy, based on a systematic review of 82 randomized clinical trials in the May 9 issue of JAMA.
The majority of the studies, however, were poorly conducted with considerable limitations, and more research is needed to determine which probiotics are associated with the greatest efficacy in the setting of specific antibiotics, said Susanne Hempel, Ph.D., of the Southern California Evidence-Based Practice Center, Rand Health, Santa Monica, and her associates.
In fact, when examined individually, most of the studies showed no significant benefit from using probiotics. However, when the results of 63 of these trials were pooled in a meta-analysis involving 11,811 subjects, the use of probiotics decreased the relative risk of developing antibiotic-related diarrhea when compared with not using them, the investigators found.
Adjunct probiotic therapy also reduced the number of study subjects who experienced severe diarrhea, they added.
The investigators reviewed the literature in 12 electronic databases and screened 2,426 reports on probiotic use published during the last 30 years. They included 82 randomized clinical trials that compared the therapy against no treatment, placebo, or a different dose of probiotics in their meta-analysis.
The study subjects included children, adults, and the elderly who were taking antibiotics for a variety of indications. Sixteen of the trials examined the use of a single antibiotic, and the remainder assessed numerous antibiotics. Two trials focused on the use of probiotics to treat rather than to prevent antibiotic-associated diarrhea.
The use of probiotics appears to lower the risk of developing diarrhea while taking antibiotic therapy, based on a systematic review of 82 randomized clinical trials in the May 9 issue of JAMA.
The majority of the studies, however, were poorly conducted with considerable limitations, and more research is needed to determine which probiotics are associated with the greatest efficacy in the setting of specific antibiotics, said Susanne Hempel, Ph.D., of the Southern California Evidence-Based Practice Center, Rand Health, Santa Monica, and her associates.
In fact, when examined individually, most of the studies showed no significant benefit from using probiotics. However, when the results of 63 of these trials, involving 11,811 subjects, were pooled in a meta-analysis, probiotic use significantly decreased the risk of antibiotic-associated diarrhea compared with no probiotic use, the investigators found.
Adjunct probiotic therapy also reduced the number of study subjects who experienced severe diarrhea, they added.
The investigators reviewed the literature in 12 electronic databases and screened 2,426 reports on probiotic use published during the last 30 years. They included 82 randomized clinical trials that compared the therapy against no treatment, placebo, or a different dose of probiotics in their meta-analysis.
The study subjects included children, adults, and the elderly who were taking antibiotics for a variety of indications. Sixteen of the trials examined the use of a single antibiotic, and the remainder assessed numerous antibiotics. Two trials focused on the use of probiotics to treat rather than to prevent antibiotic-associated diarrhea.
All types of probiotics were included in the meta-analysis, including the genera Lactobacillus, Bifidobacterium, Saccharomyces, Streptococcus, Enterococcus, and Bacillus, alone or in various combinations.
Overall, the quality of the research was considered low. Fifty-nine studies "lacked adequate information to assess the overall risk of bias." Sixty-four never stated whether treatment allocation was blinded, 31 didn’t report an intent-to-treat analysis, and approximately half did not include a calculation of the study’s statistical power to detect differences in outcomes.
In addition, 17 trials were industry sponsored, and 52 did not clarify the role of funding or conflicts of interest. Perhaps most important, 59 of the 82 randomized clinical trials did not report on adverse events specifically related to the use of probiotics, Dr. Hempel and her colleagues said.
Probiotics have been linked to serious adverse effects such as fungemia and bacterial sepsis. "It is noteworthy that few trials addressed these outcomes, especially because cases of such infections suspected to be associated with the administered organisms were reported decades ago," Dr. Hempel and her associates noted.
The use of probiotics was found to reduce the risk of antibiotic-associated diarrhea, with a relative risk of 0.58. This benefit was consistent across several subgroups of patients and in different sensitivity analyses.
"The treatment effect equates to a number needed to treat of 13," the investigators said (JAMA 2012;307:1959-69).
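As a back-of-the-envelope check, the quoted number needed to treat can be related to the pooled relative risk. The implied control-group risk below is derived from those two published figures for illustration only; it is not a value reported in the paper.

```python
# Relate the pooled relative risk (RR = 0.58) to the reported number
# needed to treat (NNT = 13). NNT = 1 / absolute risk reduction (ARR),
# and ARR = control_risk * (1 - RR), so the baseline risk of
# antibiotic-associated diarrhea implied by these two figures can be
# back-calculated (illustrative, not a study result).
rr = 0.58
nnt = 13
arr = 1 / nnt                  # absolute risk reduction, ~0.077
control_risk = arr / (1 - rr)  # implied control-group risk, ~0.18
print(f"ARR = {arr:.3f}, implied control-group risk = {control_risk:.1%}")
```

In other words, the reported NNT of 13 is consistent with roughly an 18% rate of antibiotic-associated diarrhea in the comparison groups.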
There was no evidence that the benefit varied by type of probiotic, but it was impossible to assess this question adequately because most of the trials used blends of genera, species, and strains.
The treatment benefit appeared to be consistent regardless of patient age, the clinical indication for antibiotic therapy, and the duration of antibiotic therapy. It wasn’t possible to assess the advantages of probiotics by type of antibiotic agent because the trials in this meta-analysis rarely specified which antibiotics were used, or else they involved patients taking a variety of antibiotics.
This study was funded by Rand. No relevant financial relationships were reported.
FROM JAMA
Major Finding: Overall, probiotics reduced the risk of developing diarrhea during antibiotic therapy, with an RR of 0.58. However, the quality of the majority of the research in this area is poor, and further studies are needed to determine which probiotics work best in the setting of specific antibiotics.
Data Source: Findings were based on a meta-analysis of 82 randomized controlled trials performed over the past 30 years assessing probiotic therapy to prevent antibiotic-associated diarrhea.
Disclosures: This study was funded by Rand. No relevant financial relationships were reported.
Variceal Rebleeding Twice as Likely If Beta-Blockers Fail
Patients with cirrhosis whose first episode of acute variceal bleeding occurs when they are already taking prophylactic beta-blockers are at increased risk for recurrent bleeding, Dr. Andrea Ribeiro de Souza and colleagues reported in the June issue of Clinical Gastroenterology and Hepatology.
The risk of recurrence is approximately twice as high in such patients as in those who are not taking prophylactic beta-blockers when their first variceal bleed occurs. This is true even when patients receive the currently recommended secondary therapy after nonselective beta-blocker prophylaxis fails, which is a combination of endoscopic band ligation and further beta-blocker treatment, with or without the addition of isosorbide-5-mononitrate.
These results, taken together with those of two recent studies showing that patients who undergo endoscopic band ligation have a "dismal" rate of variceal rebleeding, suggest that patients who don’t respond to prophylactic beta-blockers "have an idiosyncrasy that makes them also poor responders to endoscopic therapy.
"Since there are no baseline clinical or hemodynamic characteristics that could differentiate this population, it can be speculated that their increased bleeding risk may be related to other factors, perhaps ... peculiarities of the esophageal circulation, which [have] never been investigated so far," said Dr. de Souza and associates at the University of Barcelona and Centro de Investigacion Biomedica en Red de Enfermedades Hepaticas y Digestivas (Ciberehd) (Clin. Gastroenterol. Hepatol. 2012 [doi:10.1016/j.cgh.2012.02.011]).
Primary prophylaxis of variceal bleeding with nonselective beta-blockers is now widely used, so the number of cirrhosis patients who experience their first episode of bleeding while taking these drugs is increasing. Until now, no study has explored whether these patients differ significantly from those who aren’t taking the drugs when they have their first variceal bleed.
Dr. de Souza and colleagues examined this question using data from the liver unit of their hospital during 2007-2011, on 89 consecutive patients treated for acute variceal bleeding. Thirty-four of the study subjects had their first bleed while on beta-blocker prophylaxis, and 55 subjects were not taking the medication.
Subjects were treated according to current recommendations. On admission they received an intravenous vasoconstrictor (terlipressin or somatostatin) and prophylactic antibiotics, and they underwent endoscopic band ligation (EBL) within 12 hours. Those whose bleeding was controlled were started on oral propranolol or nadolol, which was increased until heart rate or systolic blood pressure had fallen to appropriate levels.
Isosorbide was started in 21 patients. EBL sessions were scheduled every 2 weeks until varices were eradicated, and patients took proton pump inhibitors until that time as well.
Variceal obliteration was achieved in only 67% of patients who had already been taking beta-blocker prophylaxis, compared with 80% of those who had not.
All subjects underwent surveillance endoscopy at 1-3 months, and at 6-month intervals thereafter. Further EBL was done if varices reappeared. Patients were followed for 2 years, or until liver transplantation or death occurred.
The primary end point of the study was rebleeding from any source during follow-up. The cumulative incidence of rebleeding from any source was 48% for patients already taking beta-blocker prophylaxis, compared with 24% in the other group.
When the analysis was restricted to rebleeding from varices only, the rate was still significantly higher among patients already taking beta-blocker prophylaxis (39%) than in the other group (17%).
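The roughly twofold risk described above can be checked directly against the reported cumulative incidences. The relative risks computed below are derived from those percentages for illustration; they are not figures quoted by the authors.

```python
# Reported cumulative rebleeding incidences from the de Souza study,
# as (on beta-blocker prophylaxis, not on prophylaxis).
rates = {
    "any source":   (0.48, 0.24),
    "varices only": (0.39, 0.17),
}
# Derive the relative risk and absolute risk difference for each outcome
# (an illustrative calculation, not one reported in the paper).
for outcome, (on_bb, off_bb) in rates.items():
    rr = on_bb / off_bb
    print(f"{outcome}: RR = {rr:.2f}, risk difference = {on_bb - off_bb:.0%}")
```

The any-source ratio works out to exactly 2.0, and the varices-only ratio to about 2.3, consistent with the "approximately twice as high" risk stated above.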
This discrepancy persisted across all subgroups in further analyses, regardless of whether the cirrhosis was or was not alcohol related, whether or not the subjects were actively drinking at the time of the first variceal bleed, and whether or not patients were treated with isosorbide.
These findings indicate that patients whose first variceal bleed occurs while they are taking prophylactic beta-blockers are not likely to benefit from EBL, "and would probably be best treated with new and more effective drugs to achieve target reductions in portal pressure. A possibility is the use of carvedilol, a nonselective beta-blocker with intrinsic vasodilator activity that causes a greater reduction in hepatic vein pressure gradient than propranolol or nadolol," the researchers said.
An even better option might be transjugular intrahepatic portosystemic shunting, since medication typically achieves only a modest decrease in hepatic vein pressure gradient, which may not be sufficient to prevent bleeding recurrences, they added.
The study was supported in part by grants from Instituto de Salud Carlos III, Ministerio de Ciencia e Innovación. The Ciberehd is funded by Instituto de Salud Carlos III. Dr. Andrea Ribeiro de Souza’s work is funded by a grant from the BBVA Foundation. The investigators reported no financial conflicts of interest.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Major Finding: The cumulative incidence of rebleeding from any source was 48% for patients already taking beta-blocker prophylaxis, compared with 24% in the group not on beta-blockers.
Data Source: A total of 89 consecutive patients in a prospective database were analyzed in this single-center study.
Disclosures: The study was supported in part by grants from the Instituto de Salud Carlos III, Ministerio de Ciencia e Innovación. The Ciberehd is funded by the Instituto de Salud Carlos III. Dr. Andrea Ribeiro de Souza’s work is funded by a grant from the BBVA foundation. The investigators reported no financial conflicts of interest.
Profile Predicts Longer Survival in Multiple Myeloma
Investigators in France have identified a profile that predicts longer survival for one in five patients with newly diagnosed multiple myeloma, according to a study published online April 30 in the Journal of Clinical Oncology.
The absence of three key chromosomal abnormalities in malignant plasma-cell samples, together with a low beta-2 microglobulin level, was seen in this subgroup of patients. In addition, patients younger than 55 years had longer progression-free and overall survival in a relatively young population that was limited to patients less than age 66 years.
The finding favoring younger age was unexpected and "to our knowledge, it has not been reported before," said Dr. Hervé Avet-Loiseau of the hematology laboratory, Biology Institute, University of Nantes (France), and his associates.
Whereas most prognostic studies are designed to identify myeloma patients with poorer outcomes, the researchers sought to identify patients with longer life expectancy. To that end, they updated and reanalyzed the data of patients treated in the IFM (Intergroupe Francophone du Myelome) 99-02 and 99-04 trials.
Sixty percent of the 520 patients studied did not carry any of the three high-risk genetic abnormalities. Those who also were younger than age 55 years and had beta-2 microglobulin levels less than 5.5 mg/L had an 8-year probability of survival of 75%. "This subgroup represented 20% of the entire patient population," the investigators noted.
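For a sense of scale, the reported proportions can be converted into approximate patient counts, assuming the percentages apply to the full cohort of 520. This is an illustrative calculation, not a set of counts given in the paper.

```python
# Approximate patient counts implied by the reported proportions
# (illustrative; the paper reports percentages, not these counts).
cohort = 520
no_high_risk = round(0.60 * cohort)  # no high-risk chromosomal abnormality
favorable = round(0.20 * cohort)     # also <55 years with B2M < 5.5 mg/L
print(no_high_risk, favorable)
```

That is, roughly 312 patients lacked all three abnormalities, and about 104, or one in five, met the full favorable profile.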
Two of the chromosomal abnormalities – t(4;14) translocation and loss of the short arm of chromosome 17, or del(17p) – are known to be associated with a poor outcome and usually are assessed as part of risk stratification in patients with multiple myeloma. The third abnormality – a gain in chromosome 1q – has recently been recognized as a prognostic indicator but has not yet been added to the typical panel of genetic probes used to assess patient prognosis.
To determine which prognostic indicators can be used to define shorter or longer survival, rather than just "poor outcome" or "better outcome," Dr. Avet-Loiseau and his colleagues included the assessment of 1q gains along with t(4;14) translocations and deletions of 17p in this large series.
For their study, Dr. Avet-Loiseau and his associates analyzed stored bone marrow and plasma samples of 520 patients who had all received the same induction regimen of vincristine, Adriamycin, and dexamethasone followed by high-dose melphalan. All the study subjects were younger than age 66 years. Median overall survival was 7.5 years.
A total of 11% of the cohort had t(4;14) translocations, 5.4% had del(17p), and 33% had 1q gains (J. Clin. Oncol. 2012 [doi:10.1200/JCO.2011.36.5726]).
In contrast with the favorable subgroup, patients who had two or more of these abnormalities had a median overall survival of only 33 months. The findings indicate that assessment of 1q gains should be added to the panel of probes used routinely in determining prognosis in patients with multiple myeloma, the researchers said.
"The question now concerns the role of novel drugs in this prognostication," the authors wrote. None of the patients received bortezomib (Velcade) in first-line treatment, although most were given novel drugs upon progression.
As bortezomib may help to overcome the poor prognosis associated with t(4;14), long-term analyses of first-line trials are warranted, they said, noting, "However, such analyses will not be possible for 4-5 years, because the first trials testing this drug started in 2005."
Dr. Avet-Loiseau reported no financial conflicts of interest; one of his associates reported ties to Celgene and Janssen-Cilag.
FROM THE JOURNAL OF CLINICAL ONCOLOGY
Major Finding: Patients with multiple myeloma who did not have three key chromosomal abnormalities, were younger than 55 years, and had low levels of beta-2 microglobulin had an 8-year probability of survival of 75%.
Data Source: A secondary analysis of genetic and other high-risk factors in 520 adults with multiple myeloma who participated in previous clinical trials of induction therapy.
Disclosures: Dr. Avet-Loiseau reported no financial conflicts of interest; one of his associates reported ties to Celgene and Janssen-Cilag.
Women 30% More Likely to Survive Melanoma Than Men
Among patients with stage I or II cutaneous melanoma, women have been found to have a consistent 30% advantage over men in overall survival, disease-specific survival, rate of distant metastasis, rate of lymph node metastasis, and rate of relapse, a study published online April 30 in the Journal of Clinical Oncology has shown.
"The 30% advantage extends to the whole spectrum of melanoma disease behavior," reported Dr. Arjen Joosse of Erasmus University Medical Center, Rotterdam, the Netherlands, and his associates.
Women with melanoma are known to have higher survival rates than men, but the details of the difference had never been thoroughly explored. Some experts have proposed that men have more advanced disease at diagnosis because they are less aware of melanoma, less likely to be screened, and less likely to seek medical care for a suspect lesion. Others contend that biologic differences between the sexes account for survival differences, and point to estrogen as a likely contributor.
Dr. Joosse and his colleagues examined the issue by analyzing the pooled results of four large, randomized phase III clinical trials of localized melanoma performed by the European Organisation for Research and Treatment of Cancer (EORTC). The trials, which investigated different therapies for the disease, involved detailed medical records and "meticulous" follow-up of 2,672 patients (48% men and 52% women).
"Women exhibited an independent, significant, and consistent advantage of approximately 30%" for overall survival, relapse-free survival, disease-specific survival, time to in-transit metastasis, lymph node metastasis, and distant metastasis, the investigators reported (J. Clin. Oncol. 2012 April 30 [doi:10.1200/JCO.2011.38.0584]).
This sex-based difference persisted across numerous prognostic subgroups of patients, regardless of the location of the initial lesion, Breslow thickness, the presence or absence of ulceration, and whether the patient underwent sentinel node biopsy or elective lymph node dissection. If the hypothesis about sex differences in melanoma detection, screening, and diagnostic delays were true, there should be marked differences in the discrepancy between men and women across such subgroups; but no such differences were found.
Moreover, because women showed both a longer delay before relapse and a higher cure rate, compared with men, "it seems that whatever the cause of the female advantage may be, it causes both a delay in progression and a larger subset of melanomas being cured in women, compared with men," the researchers wrote.
To explore the hypothesis that estrogen might be the source of women’s survival advantage, the investigators classified the female patients by age to approximate their menopausal status.
Postmenopausal women (defined as those aged 60 years and older) retained the 30% advantage in overall survival, relapse-free survival, time to lymph node metastasis, and time to distant metastasis, compared with premenopausal women (aged 45 and younger). The advantage for disease-specific survival declined significantly in this analysis, but that may be a chance finding because of the small sample sizes and low event rates in these subgroups.
Thus, estrogen alone cannot account for the sex-based differences in survival. Other factors that may be involved include androgen receptors in melanoma cells; differences in oxidative stress between men and women; differences between the sexes in vitamin D metabolism, because vitamin D levels appear to affect melanoma prognosis; and differences in immune homeostasis, since melanoma is thought to be immunogenic.
Unravelling the underlying cause of the survival difference between men and women could point the way to targeted therapies, the investigators noted.
They added that the 30% survival advantage in their study is consistent with a 30% advantage in 5 of the 7 published studies in the literature that included 10,000 or more patients.
The study investigators reported no relevant financial disclosures.
Using different therapeutic approaches for men than for women with localized melanoma would be premature now, since we don’t yet know exactly what drives the discrepancy in survival, according to Dr. Vernon K. Sondak and his colleagues.
But we can still take aim at men’s poorer outcomes, by increasing men’s skin cancer awareness and promoting their self-examination, as well as examination by both dermatologists and primary care physicians. "If even a portion of the observed 30% sex-based differences in outcome can be eliminated by focused early detection and prevention strategies in men, this could save many lives in the United States and around the world each year," they wrote.
Dr. Sondak is at the Moffitt Cancer Center and the University of South Florida, Tampa. Dr. Sondak and his colleagues said they had no relevant financial disclosures. These comments were taken from their editorial accompanying Dr. Joosse’s study (J. Clin. Oncol. 2012 April 30 [doi:10.1200/JCO.2011.41.3849]).
FROM THE JOURNAL OF CLINICAL ONCOLOGY
Major Finding: Compared with men, women with melanoma showed a consistent advantage of approximately 30% for overall survival, relapse-free survival, disease-specific survival, lymph node metastasis, and distant metastasis.
Data Source: A pooled analysis of data from four large, randomized clinical trials involving 2,672 adults with localized melanoma who were closely followed for disease progression.
Disclosures: The investigators said they had no relevant financial disclosures.
Insulin Degludec Matches Insulin Glargine Efficacy
Insulin degludec, an ultralong-acting insulin now in clinical development, proved noninferior to insulin glargine in two parallel, phase III randomized trials sponsored by the manufacturer and reported in the April 21 issue of Lancet.
The new insulin was as effective as insulin glargine at reducing hemoglobin A1c levels in one study of patients with type 1 diabetes and in another of patients with type 2 diabetes. Patients reported significantly fewer episodes of hypoglycemia with insulin degludec, both research groups reported.
Fear of hypoglycemic events often interferes with patients’ initiating or intensifying their insulin therapy, and may be the leading cause of inadequate insulin dosing, the researchers noted.
When injected subcutaneously, insulin degludec forms a depot of soluble multihexamers that slowly and continuously release the drug into the circulation. Insulin degludec has a half-life of 25 hours (twice that of insulin glargine) and a duration of action of more than 40 hours.
In the first study, 629 adults who had longstanding type 1 diabetes and had been treated with basal-bolus insulin for at least 1 year were randomly assigned to take once-daily subcutaneous injections of either insulin degludec (472 subjects) or insulin glargine (157 subjects), as well as subcutaneous injections of insulin aspart at every meal, a strategy known as basal-bolus treatment, said Dr. Simon Heller of the University of Sheffield (England) and his associates.
The trial was open label because the injection devices for the basal insulins were different, so subjects and researchers could not be blinded to treatment assignment.
The study subjects were treated and followed for 1 year at 79 sites in France, Germany, Russia, South Africa, the United Kingdom, and the United States. Novo Nordisk, the manufacturer of insulin degludec, designed the study, supplied products and equipment, and provided data monitoring and management, statistical analysis, and the written report of the trial results.
The primary efficacy outcome was the mean percent decrease in HbA1c levels from baseline values, which were less than or equal to 10% (86 mmol/mol). The mean percent reduction was 0.40% with insulin degludec and 0.39% with insulin glargine, demonstrating the noninferiority of the new insulin, the investigators said (Lancet 2012;379:1489-97).
Similar proportions of patients achieved target HbA1c levels with insulin degludec (40%) and insulin glargine (43%). Mean fasting plasma glucose levels also declined to the same degree in the two groups. Mean weight gain was similar, at 1.8 kg with insulin degludec and 1.6 kg with insulin glargine.
"At the end of the trial, the mean values for daily basal, daily bolus, and daily total insulin dose were significantly lower by 14%, 10%, and 11%, respectively, in the insulin degludec group relative to the insulin glargine group," Dr. Heller and his colleagues said. "This difference might be attributable to a requirement for higher doses of insulin glargine to achieve adequate 24-hour coverage when used once daily."
The rates of hypoglycemic episodes, of severe hypoglycemic episodes, and of daytime hypoglycemic episodes were not significantly different between the two study groups. The rate of nocturnal hypoglycemic episodes, however, was 25% lower with insulin degludec. In the first 12 hours after a once-daily injection, which many patients perform at bedtime, approximately 50% of insulin degludec and 60% of insulin glargine are released, the researchers explained.
The rates of other adverse events were similar between the two groups, and there were no significant differences in patient assessments of quality of life.
Dr. Heller and his colleagues cautioned that rates of severe hypoglycemia are much higher in real-world settings than in clinical trials – as much as 20 times higher in one large, observational study. "This difference is partly because patients with recurrent episodes are usually excluded from trials, but also because those in trials receive close supervision and are treated according to protocol. Whether reductions in hypoglycemia reported in clinical trials translate into benefits in clinical practice remains to be seen," they said.
The second phase III study was identical in design and differed only in the patient population, which was limited to patients with type 2 diabetes. These 1,006 subjects had an HbA1c of 7%-10% after 3 months or more of any insulin regimen (with or without oral antidiabetic drugs), and were treated and followed at 123 sites in Bulgaria, Germany, Hong Kong, Ireland, Italy, Romania, Russia, Slovakia, South Africa, Spain, Turkey, and the United States.
After 1 year, the mean HbA1c level had decreased by 1.10% with insulin degludec and by 1.18% with insulin glargine, confirming the noninferiority of insulin degludec, said Dr. Alan J. Garber of Baylor College of Medicine, Houston, and his associates.
The proportion of patients who achieved target HbA1c levels was similar in both groups, at 49% with insulin degludec and 50% with insulin glargine, they said (Lancet 2012;379:1498-507).
Rates of overall hypoglycemia were significantly lower with insulin degludec (11.09 episodes per patient-year of exposure) than with insulin glargine (13.63 episodes per patient-year), as were rates of nocturnal and daytime hypoglycemia.
Mean weight gain was the same between the two study groups, as were the rates of adverse events and of serious adverse events other than hypoglycemia.
Both studies were sponsored by Novo Nordisk, manufacturer of degludec. Novo Nordisk designed both studies, supplied products and equipment, and provided data monitoring and management, statistical analysis, and the written report of the trial results. Dr. Heller, Dr. Garber, and their associates reported numerous ties to industry sources. Dr. Tahrani, Dr. Bailey, and Dr. Barnett also reported ties to numerous industry sources, including Novo Nordisk.
Both studies demonstrate that degludec could be a valuable addition to presently available insulins, particularly because of degludec’s benefit regarding hypoglycemia, which "is commonly the greatest barrier to achievement of normal glucose concentrations with insulin therapy," said Dr. Abd A. Tahrani, Dr. Clifford J. Bailey, and Dr. Anthony H. Barnett.
However, it is not yet known whether the steady insulin concentrations achieved in these clinical trials will translate into clinical benefit. *"Indeed, the proportion of patients who had any symptoms of hypoglycemia was very high (greater than 80%) in both studies," they noted.
Dr. Tahrani is at the centre of endocrinology, diabetes, and metabolism at the University of Birmingham (England) Institute of Biomedical Research. Dr. Bailey is in the school of life and health sciences at Aston University, Birmingham. Dr. Barnett is at the diabetes centre at Birmingham Heartlands Hospital. They reported ties to numerous industry sources, including Novo Nordisk. These remarks were taken from their editorial comment accompanying the two reports on insulin degludec (Lancet 2012;379:1465-7).
*CORRECTION 5/9/12: A previous version of this story misstated the proportion of patients who had any symptoms of hypoglycemia.
Insulin degludec, an ultralong-acting insulin now in clinical development, proved noninferior to insulin glargine in two parallel, phase III randomized trials sponsored by the manufacturer and reported in the April 21 issue of Lancet.
The new insulin was as effective as insulin glargine at reducing hemoglobin A1c levels in one study of patients with type 1 diabetes and in another of patients with type 2 diabetes. Patients reported significantly fewer episodes of hypoglycemia with insulin degludec, both research groups reported.
Fear of hypoglycemic events often interferes with patients’ initiating or intensifying their insulin therapy, and may be the leading cause of inadequate insulin dosing, the researchers noted.
When injected subcutaneously, insulin degludec forms a depot of soluble multihexamers that slowly and continuously release the drug into the circulation. Insulin degludec has a half-life of 25 hours (twice that of insulin glargine) and a duration of action of more than 40 hours.
In the first study, 629 adults who had longstanding type 1 diabetes and had been treated with basal-bolus insulin for at least 1 year were randomly assigned to take once-daily subcutaneous injections of either insulin degludec (472 subjects) or insulin glargine (157 subjects), as well as subcutaneous injections of insulin aspart at every meal, a strategy known as basal-bolus treatment, said Dr. Simon Heller of the University of Sheffield (England) and his associates.
The trial was open label because the injection devices for the basal insulins were different, so subjects and researchers could not be blinded to treatment assignment.
The study subjects were treated and followed for 1 year at 79 sites in France, Germany, Russia, South Africa, the United Kingdom, and the United States. Novo Nordisk, the manufacturer of insulin degludec, designed the study, supplied products and equipment, and provided data monitoring and management, statistical analysis, and the written report of the trial results.
The primary efficacy outcome was the mean percent decrease in HbA1c levels from baseline values, which were less than or equal to 10% (86 mmol/mol). The mean percent reduction was 0.40% with insulin degludec and 0.39% with insulin glargine, demonstrating the noninferiority of the new insulin, the investigators said (Lancet 2012;379:1489-97).
Similar proportions of patients achieved target HbA1c levels with insulin degludec (40%) and insulin glargine (43%). Mean fasting plasma glucose levels also declined to the same degree in the two groups. Mean weight gain was similar, at 1.8 kg with insulin degludec and 1.6 kg with insulin glargine.
"At the end of the trial, the mean values for daily basal, daily bolus, and daily total insulin dose were significantly lower by 14%, 10%, and 11%, respectively, in the insulin degludec group relative to the insulin glargine group," Dr. Heller and his colleagues said. "This difference might be attributable to a requirement for higher doses of insulin glargine to achieve adequate 24-hour coverage when used once daily."
The rates of hypoglycemic episodes, of severe hypoglycemic episodes, and of daytime hypoglycemic episodes were not significantly different between the two study groups. The rate of nocturnal hypoglycemic episodes, however, was 25% lower with insulin degludec. In the first 12 hours after a once-daily injection, which many patients perform at bedtime, approximately 50% of insulin degludec and 60% of insulin glargine are released, the researchers explained.
The rates of other adverse events were similar between the two groups, and there were no significant differences in patient assessments of quality of life.
Dr. Heller and his colleagues cautioned that rates of severe hypoglycemia are much higher in real-world settings than in clinical trials – as much as 20 times higher in one large, observational study. "This difference is partly because patients with recurrent episodes are usually excluded from trials, but also because those in trials receive close supervision and are treated according to protocol. Whether reductions in hypoglycemia reported in clinical trials translate into benefits in clinical practice remains to be seen," they said.
The second phase III study was identical in design and differed only in the patient population, which was limited to patients with type 2 diabetes. These 1,006 subjects had an HbA1c of 7%-10% after 3 months or more of any insulin regimen (with or without oral antidiabetic drugs), and were treated and followed at 123 sites in Bulgaria, Germany, Hong Kong, Ireland, Italy, Romania, Russia, Slovakia, South Africa, Spain, Turkey, and the United States.
After 1 year, the mean HbA1c level had decreased by 1.10% with insulin degludec and by 1.18% with insulin glargine, confirming the noninferiority of insulin degludec, said Dr. Alan J. Garber of Baylor College of Medicine, Houston, and his associates.
The proportion of patients who achieved target HbA1c levels was similar in both groups, at 49% with insulin degludec and 50% with insulin glargine, they said (Lancet 2012;379:1498-507).
Rates of overall hypoglycemia were significantly lower with insulin degludec (11.09 episodes per patient-year of exposure) than with insulin glargine (13.63 episodes per patient-year), as were rates of nocturnal and daytime hypoglycemia.
Mean weight gain was the same between the two study groups, as were the rates of adverse events and of serious adverse events other than hypoglycemia.
Both studies were sponsored by Novo Nordisk, manufacturer of degludec. Novo Nordisk designed both studies, supplied products and equipment, and provided data monitoring and management, statistical analysis, and the written report of the trial results. Dr. Heller, Dr. Garber, and their associates reported numerous ties to industry sources. Dr. Tahrani, Dr. Bailey, and Dr. Barnett also reported ties to numerous industry sources, including Novo Nordisk.
FROM THE LANCET
Major Finding: After 1 year, insulin degludec reduced HbA1c levels by 0.40%, compared with insulin glargine (0.39%) in patients with type 1 diabetes; reductions were 1.10% and 1.18%, respectively, in patients with type 2 diabetes.
Data Source: Researchers conducted two international open-label, phase III, randomized clinical trials comparing 1 year of daily subcutaneous injections with either insulin degludec or insulin glargine in 629 adults with type 1 diabetes and 1,006 adults with type 2 diabetes.
Disclosures: Both studies were sponsored by Novo Nordisk, manufacturer of degludec. Novo Nordisk designed both studies, supplied products and equipment, and provided data monitoring and management, statistical analysis, and the written report of the trial results. Dr. Heller, Dr. Garber, and their associates reported numerous ties to industry sources.
Botox Injections Flunk for Headache Prevention
Injections of botulinum toxin A may be of some benefit in preventing chronic migraine and chronic daily headaches, but that benefit is small and does not extend to episodic migraine, episodic tension-type headaches, or chronic tension-type headaches, according to a report in the April 25 issue of JAMA.
In a metaanalysis of 31 randomized controlled trials, botulinum toxin A injections reduced the number of chronic migraine headaches from 19.5 to 17.2 per month and the number of chronic daily headaches from 17.5 to 15.4 per month, differences of unknown clinical importance. The treatment did not reduce the frequency of other types of headache, said Dr. Jeffrey L. Jackson of the Zablocki Veterans Affairs Medical Center and the Medical College of Wisconsin, Milwaukee, and his associates.
"Our finding of minimal benefit is contrary to findings from case series and open-label studies that suggested substantial benefits. These differences in results may be due to a strong association of placebo with improved outcomes and the natural history of headaches, in which improvement is observed over time," they noted.
Dr. Jackson and his colleagues searched the literature for randomized clinical trials of at least 4 weeks’ duration that compared botulinum toxin A injections against either placebo injections or prophylactic medications.
In 27 placebo-controlled trials involving adults, the average subject age was 42 years and the average duration of the study was 19 weeks (range, 84-270 days). A total of 1,938 of these subjects had episodic migraines, 1,544 had chronic migraines, 616 had chronic tension-type headaches, and 1,115 had chronic daily headaches.
Botulinum A injections were associated with a reduction of approximately two headaches per month for both chronic migraine and chronic daily headaches, but did not reduce the other types of headache. Moreover, there was a "substantial" placebo effect among control subjects, with a significant number of them reporting reduced headaches over time.
The researchers also analyzed four trials that compared the injections against prophylactic medications. Botulinum toxin A injections were no more effective than were topiramate, amitriptyline, or valproate at preventing any type of headache.
The injections did reduce headache severity in a single trial comparing them against methylprednisolone, but given that corticosteroids are not generally used for headache prophylaxis, "it is unclear how useful this comparison is clinically," Dr. Jackson and his associates said (JAMA 2012;307:1736-45).
Study subjects who received botulinum toxin A injections were more likely to report adverse effects than were those who received placebo injections, including blepharoptosis, muscle weakness, neck pain, neck stiffness, paresthesia, and skin tightness.
Outcomes with botulinum toxin A injections were the same regardless of whether they were administered on a fixed or a flexible schedule, whether particular muscle groups were injected, or whether injection sites were selected on the basis of patients’ pain reports. Outcomes also were the same whether the injections were performed once or three times at 90-day intervals.
There also were no differences in outcomes according to the number of muscle groups injected or the total dose of botulinum toxin A administered, the investigators added.
Among the study’s limitations was the fact that for nearly all the headache subtypes, there were relatively few studies and many of the studies were small.
No relevant financial conflicts of interest were reported.
FROM JAMA
Major Finding: Botulinum A injections were associated with a reduction of approximately two headaches per month for patients with chronic migraine and chronic daily headaches, but did not reduce any other types of headache.
Data Source: This was a metaanalysis of 27 placebo-controlled randomized clinical trials assessing botulinum toxin A injections in more than 5,000 headache patients, and 4 comparative-effectiveness trials assessing the injections against topiramate, valproate, amitriptyline, and methylprednisolone.
Disclosures: No relevant financial conflicts of interest were reported.
Amyloid-Beta-Associated Cognitive Decline Only Occurs at High P-Tau Levels
In clinically normal older people, the cognitive decline associated with high levels of amyloid-beta in the cerebrospinal fluid only takes place if elevated phospho-tau levels also are present, according to a report published online April 23 in Archives of Neurology.
This indicates that amyloid-beta deposition by itself is not associated with the cognitive decline that is characteristic of Alzheimer’s disease (AD) but becomes so when accompanied by high levels of phospho-tau (p-tau). "In the absence of p-tau, the effect of amyloid-beta on longitudinal clinical decline is not significantly different from zero," said Dr. Rahul S. Desikan of the department of radiology, University of California, San Diego, and his associates.
The study findings suggest that p-tau may be an important marker of Alzheimer’s-associated degeneration, more so than total tau (t-tau). "Elevations of CSF t-tau are seen in a number of neurologic disorders characterized by neuronal and axonal death, whereas increased CSF p-tau correlates with increased neurofibrillary pathology and can distinguish AD from other neurodegenerative disorders," they said.
The investigators examined the relationships among CSF markers of AD at the preclinical stage of the disease using data on 107 healthy control subjects from 50 sites across the United States and Canada who participated in the Alzheimer Disease Neuroimaging Initiative, a collaborative effort begun in 2003 and funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, the Food and Drug Administration, several pharmaceutical companies, and nonprofit organizations. The initiative examined whether serial MRI and PET images, together with biological markers, clinical assessments, and neuropsychological testing, could measure the progression of mild cognitive impairment and early AD.
For their study, Dr. Desikan and his colleagues classified the 107 normal subjects as having either high or low levels of p-tau and high or low levels of amyloid-beta in CSF samples. The subjects were followed for a mean of 3 years, undergoing periodic assessments of cognitive status using the global Clinical Dementia Rating scale, the CDR–Sum of Boxes (CDR-SB) subscale, and the Alzheimer Disease Assessment Scale–cognitive (ADAS-cog) subscale.
Study subjects who had high CSF levels of amyloid-beta showed declines on the cognitive assessments only if they also had elevated CSF levels of p-tau. "These data suggest that the combination of p-tau and amyloid-beta likely reflects underlying pathobiology of the preclinical stage of AD," the researchers said (Arch. Neurol. 2012 April 23 [doi:10.1001/archneurol.2011.3354]).
Specifically, in older patients, positive CSF amyloid-beta1-42 status significantly correlated with change in global CDR (beta1 = 0.03; standard error = 0.01; P = .04), CDR-SB (beta1 = 0.09; SE = 0.05; P less than .05), and ADAS-cog (beta1 = 0.59; SE = 0.23; P = .01). "To ensure that our results were not owing to a categorical treatment of variables, we examined CSF amyloid-beta1-42 as a continuous variable and found significant associations between decreased CSF amyloid-beta1-42 levels and change in global CDR (beta-coefficient = -0.0002; SE = 0.0001; P = .03), CDR-SB (beta-coefficient = -0.0009; SE = 0.0004; P = .04), and ADAS-cog (beta-coefficient = -0.005; SE = 0.002; P = .02)," according to the investigators.
"From a clinical perspective, these results are consonant with the three-stage preclinical AD framework recently proposed by the National Institute on Aging–Alzheimer Association workgroup, and indicate that a biomarker profile consisting of both CSF amyloid-beta and CSF p-tau levels may better identify those older individuals who are at an elevated risk for progressing to eventual AD dementia than either biomarker by itself," they added.
Their results also highlight the need for therapies that specifically target tau. It is reasonable to hypothesize that amyloid-beta "initiates the degenerative cascade" in AD, but that elevated tau levels signal a second phase of the pathologic process in which neurodegenerative declines occur independently of amyloid-beta. "Targeting downstream events, such as tau phosphorylation and aggregation, in older individuals ... may be an additionally beneficial treatment strategy," the researchers said.
The study findings must be validated in future research, preferably that involving cohorts of older people in the general population, since the subjects of this study were highly selected, healthy older adults motivated to participate in a clinical study of AD, Dr. Desikan and his associates noted.
This study was supported by the National Institutes of Health and the Alzheimer’s Association of San Diego. Dr. Desikan’s associates reported ties to numerous industry sources.
The data in this study are important and likely reliable because they were derived from a large multisite study, and the CSF measurements were all assessed according to a standard protocol at a single site, noted Dr. David M. Holtzman.
The findings strongly suggest that researchers can use beta-amyloid and p-tau biomarkers in healthy people between 55 and 85 years of age to identify the approximately 20% who are at risk for cognitive decline. It also may be time to enroll such subjects in a secondary prevention trial to see whether therapies that target "tauopathy," amyloid-beta deposition, and neuroinflammation can be beneficial at this preclinical stage of AD, he said.
Dr. Holtzman is with the Hope Center for Neurological Disorders, and the Charles F. and Joanne Knight Alzheimer’s Disease Research Center at Washington University, St. Louis. He reported ties to Bristol-Myers Squibb, C2N Diagnostics, and Pfizer. Dr. Holtzman’s work is supported by the National Institutes of Health, the Cure Alzheimer’s Fund, Ellison Medical Foundation, Eli Lilly, AstraZeneca, Pfizer, Integrated Diagnostics, and C2N Diagnostics. These remarks were taken from his editorial accompanying Dr. Desikan’s report (Arch. Neurol. 2012 April 23 [doi:10.1001/archneurol.2012.587]).
The data in this study are important and likely reliable because they were derived from a large multisite study, and the CSF measurements were all assessed according to a standard protocol at a single site, noted Dr. David M. Holtzman.
The findings strongly suggest that researchers can use beta-amyloid and p-tau biomarkers in healthy people between 55-85 years of age to identify the approximately 20% who are at risk for cognitive decline. It also may be time to enroll such subjects in a secondary prevention trial to see whether therapies that target "tauopathy," amyloid-beta deposition, and neuroinflammation can be beneficial at this preclinical stage of AD, he said.
Dr. Holtzman is with the Hope Center for Neurological Disorders, and the Charles F. and Joanne Knight Alzheimer’s Disease Research Center at Washington University, St. Louis. He reported ties to Bristol-Myers Squibb, C2N Diagnostics, and Pfizer. Dr. Holtzman’s work is supported by the National Institutes of Health, the Cure Alzheimer’s Fund, Ellison Medical Foundation, Eli Lilly, AstraZeneca, Pfizer, Integrated diagnostics, and C2N Diagnostics. These remarks were taken from his editorial accompanying Dr. Desikan’s report (Arch. Neurol. 2012 April 23 [doi:10.1001/archneurol.2012.587]).
The data in this study are important and likely reliable because they were derived from a large multisite study, and the CSF measurements were all assessed according to a standard protocol at a single site, noted Dr. David M. Holtzman.
The findings strongly suggest that researchers can use beta-amyloid and p-tau biomarkers in healthy people between 55-85 years of age to identify the approximately 20% who are at risk for cognitive decline. It also may be time to enroll such subjects in a secondary prevention trial to see whether therapies that target "tauopathy," amyloid-beta deposition, and neuroinflammation can be beneficial at this preclinical stage of AD, he said.
Dr. Holtzman is with the Hope Center for Neurological Disorders, and the Charles F. and Joanne Knight Alzheimer’s Disease Research Center at Washington University, St. Louis. He reported ties to Bristol-Myers Squibb, C2N Diagnostics, and Pfizer. Dr. Holtzman’s work is supported by the National Institutes of Health, the Cure Alzheimer’s Fund, Ellison Medical Foundation, Eli Lilly, AstraZeneca, Pfizer, Integrated diagnostics, and C2N Diagnostics. These remarks were taken from his editorial accompanying Dr. Desikan’s report (Arch. Neurol. 2012 April 23 [doi:10.1001/archneurol.2012.587]).
In clinically normal older people, the cognitive decline associated with high levels of amyloid-beta in the cerebrospinal fluid only takes place if elevated phospho-tau levels also are present, according to a report published online April 23 in Archives of Neurology.
This indicates that amyloid-beta deposition by itself is not associated with the cognitive decline that is characteristic of Alzheimer’s disease (AD) but becomes so when accompanied by high levels of phospho-tau (p-tau). "In the absence of p-tau, the effect of amyloid-beta on longitudinal clinical decline is not significantly different from zero," said Dr. Rahul S. Desikan of the department of radiology, University of California in San Diego, and his associates.
The study findings suggest that p-tau may be an important marker of Alzheimer’s-associated degeneration, more so than total tau (t-tau). "Elevations of CSF t-tau are seen in a number of neurologic disorders characterized by neuronal and axonal death, whereas increased CSF p-tau correlates with increased neurofibrillary pathology and can distinguish AD from other neurodegenerative disorders," they said.
The investigators examined the relationships among CSF markers of AD at the preclinical stage of the disease using data on 107 healthy control subjects from 50 sites across the United States and Canada who participated in the Alzheimer Disease Neuroimaging Initiative, a collaborative effort begun in 2003 and funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, the Food and Drug Administration, several pharmaceutical companies, and nonprofit organizations. The initiative examined whether serial MRI and PET images, together with biological markers, clinical assessments, and neuropsychological testing, could measure the progression of mild cognitive impairment and early AD.
For their study, Dr. Desikan and his colleagues classified the 107 normal subjects as having either high or low levels of p-tau and high or low levels of amyloid-beta in CSF samples. The subjects were followed for a mean of 3 years, undergoing periodic assessments of cognitive status using the global Clinical Dementia Rating scale, the CDR–Sum of Boxes (CDR-SB) subscale, and the Alzheimer Disease Assessment Scale–cognitive (ADAS-cog) subscale.
Study subjects who had high CSF levels of amyloid-beta showed declines on the cognitive assessments only if they also had elevated CSF levels of p-tau. "These data suggest that the combination of p-tau and amyloid-beta likely reflects underlying pathobiology of the preclinical stage of AD," the researchers said (Arch. Neurol. 2012 April 23 [doi:10.1001/archneurol.2011.3354]).
Specifically, in older patients, positive CSF amyloid-beta1-42 status significantly correlated with change in global CDR (beta1 = 0.03; standard error = 0.01; P = .04), CDR-SB (beta1 = 0.09; SE = 0.05; P less than .05), and ADAS-cog (beta1 = 0.59; SE = 0.23; P = .01). To ensure that our results were not owing to a categorical treatment of variables, we examined CSF amyloid-beta1-42 as a continuous variable and found significant associations between decreased CSF amyloid-beta1-42 levels and change in global CDR (beta-coefficient = -0.0002; SE = 0.0001; P = .03), CDR-SB (beta-coefficient = -0.0009; SE = 0.0004; P = .04), and ADAS-cog (beta-coefficient = -0.005; SE = 0.002; P = .02)," according to the investigators.
"From a clinical perspective, these results are consonant with the three-stage preclinical AD framework recently proposed by the National Institute on Aging–Alzheimer Association workgroup, and indicate that a biomarker profile consisting of both CSF amyloid-beta and CSF p-tau levels may better identify those older individuals who are at an elevated risk for progressing to eventual AD dementia than either biomarker by itself," they added.
Their results also highlight the need for therapies that specifically target tau. It is reasonable to hypothesize that amyloid-beta "initiates the degenerative cascade" in AD, but that elevated tau levels signal a second phase of the pathologic process in which neurodegenerative declines occur independently of amyloid-beta. "Targeting downstream events, such as tau phosphorylation and aggregation, in older individuals ... may be an additionally beneficial treatment strategy," the researchers said.
The study findings must be validated in future research, preferably that involving cohorts of older people in the general population, since the subjects of this study were highly selected, healthy older adults motivated to participate in a clinical study of AD, Dr. Desikan and his associates noted.
This study was supported by the National Institutes of Health and the Alzheimer’s Association of San Diego. Dr. Desikan’s associates reported ties to numerous industry sources.
In clinically normal older people, the cognitive decline associated with high levels of amyloid-beta in the cerebrospinal fluid only takes place if elevated phospho-tau levels also are present, according to a report published online April 23 in Archives of Neurology.
This indicates that amyloid-beta deposition by itself is not associated with the cognitive decline that is characteristic of Alzheimer’s disease (AD) but becomes so when accompanied by high levels of phospho-tau (p-tau). "In the absence of p-tau, the effect of amyloid-beta on longitudinal clinical decline is not significantly different from zero," said Dr. Rahul S. Desikan of the department of radiology, University of California in San Diego, and his associates.
The study findings suggest that p-tau may be an important marker of Alzheimer’s-associated degeneration, more so than total tau (t-tau). "Elevations of CSF t-tau are seen in a number of neurologic disorders characterized by neuronal and axonal death, whereas increased CSF p-tau correlates with increased neurofibrillary pathology and can distinguish AD from other neurodegenerative disorders," they said.
The investigators examined the relationships among CSF markers of AD at the preclinical stage of the disease using data on 107 healthy control subjects from 50 sites across the United States and Canada who participated in the Alzheimer Disease Neuroimaging Initiative, a collaborative effort begun in 2003 and funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, the Food and Drug Administration, several pharmaceutical companies, and nonprofit organizations. The initiative examined whether serial MRI and PET images, together with biological markers, clinical assessments, and neuropsychological testing, could measure the progression of mild cognitive impairment and early AD.
For their study, Dr. Desikan and his colleagues classified the 107 normal subjects as having either high or low levels of p-tau and high or low levels of amyloid-beta in CSF samples. The subjects were followed for a mean of 3 years, undergoing periodic assessments of cognitive status using the global Clinical Dementia Rating scale, the CDR–Sum of Boxes (CDR-SB) subscale, and the Alzheimer Disease Assessment Scale–cognitive (ADAS-cog) subscale.
Study subjects who had high CSF levels of amyloid-beta showed declines on the cognitive assessments only if they also had elevated CSF levels of p-tau. "These data suggest that the combination of p-tau and amyloid-beta likely reflects underlying pathobiology of the preclinical stage of AD," the researchers said (Arch. Neurol. 2012 April 23 [doi:10.1001/archneurol.2011.3354]).
Specifically, in older patients, positive CSF amyloid-beta1-42 status significantly correlated with change in global CDR (beta1 = 0.03; standard error = 0.01; P = .04), CDR-SB (beta1 = 0.09; SE = 0.05; P less than .05), and ADAS-cog (beta1 = 0.59; SE = 0.23; P = .01). "To ensure that our results were not owing to a categorical treatment of variables, we examined CSF amyloid-beta1-42 as a continuous variable and found significant associations between decreased CSF amyloid-beta1-42 levels and change in global CDR (beta-coefficient = -0.0002; SE = 0.0001; P = .03), CDR-SB (beta-coefficient = -0.0009; SE = 0.0004; P = .04), and ADAS-cog (beta-coefficient = -0.005; SE = 0.002; P = .02)," according to the investigators.
"From a clinical perspective, these results are consonant with the three-stage preclinical AD framework recently proposed by the National Institute on Aging–Alzheimer Association workgroup, and indicate that a biomarker profile consisting of both CSF amyloid-beta and CSF p-tau levels may better identify those older individuals who are at an elevated risk for progressing to eventual AD dementia than either biomarker by itself," they added.
Their results also highlight the need for therapies that specifically target tau. It is reasonable to hypothesize that amyloid-beta "initiates the degenerative cascade" in AD, but that elevated tau levels signal a second phase of the pathologic process in which neurodegenerative declines occur independently of amyloid-beta. "Targeting downstream events, such as tau phosphorylation and aggregation, in older individuals ... may be an additionally beneficial treatment strategy," the researchers said.
The study findings must be validated in future research, preferably in cohorts of older people drawn from the general population, since the subjects of this study were highly selected, healthy older adults motivated to participate in a clinical study of AD, Dr. Desikan and his associates noted.
This study was supported by the National Institutes of Health and the Alzheimer’s Association of San Diego. Dr. Desikan’s associates reported ties to numerous industry sources.
FROM ARCHIVES OF NEUROLOGY
Major Finding: Older patients who were positive for amyloid-beta1-42 in their CSF were significantly more likely to show a decline in their Clinical Dementia Rating status (beta1 = 0.03; standard error = 0.01; P = .04) during 3 years of follow-up if they also had elevated CSF levels of p-tau, a marker of neurofibrillary pathology.
Data Source: This was a secondary analysis of data on 107 healthy, older control subjects who participated in the multisite Alzheimer Disease Neuroimaging Initiative.
Disclosures: This study was supported by the National Institutes of Health and the Alzheimer’s Association of San Diego. Dr. Desikan’s associates reported ties to numerous industry sources.
Subclinical Hyperthyroidism Raised CHD Mortality, AF Risks
Endogenous subclinical hyperthyroidism increased the risks of coronary heart disease mortality, total mortality, and atrial fibrillation, a study published online April 23 in the Archives of Internal Medicine has shown.
The excess risk was most pronounced in patients who had the lowest thyrotropin levels – below 0.10 mIU/L – said Dr. Tinh-Hai Collet of the department of ambulatory care and community medicine, University of Lausanne (Switzerland), and associates (Arch. Intern. Med. 2012 April 23 [doi:10.1001/archinternmed.2012.402]).
Researchers have long suspected an association between subclinical hyperthyroidism and adverse cardiovascular effects, but prospective cohort studies and study-level meta-analyses alike have reached conflicting conclusions about such a link.
Dr. Collet and colleagues conducted a patient-level meta-analysis of the issue, reasoning that examining individual participant data from large cohort studies might resolve the question.
The researchers included all 10 prospective longitudinal cohorts in the literature that reported baseline thyrotropin and free thyroxine levels, included euthyroid control groups, and specifically tracked coronary heart disease (CHD) and mortality outcomes. They excluded studies that used first-generation thyrotropin assays, because those tests are not sensitive enough to detect subclinical hyperthyroidism reliably.
The 10 cohorts comprised 52,674 patients with a median age of 59 years. The median follow-up was 8.8 years, and the total follow-up was 501,922 person-years. A total of 2,188 (4.2%) of those patients had endogenous subclinical hyperthyroidism.
Overall, 8,527 patients died during follow-up. There were 1,896 CHD deaths, 3,653 CHD events, and 785 cases of incident atrial fibrillation (AF).
For patients with subclinical hyperthyroidism, the overall hazard ratio compared with euthyroid control patients for all-cause mortality was 1.24, for CHD mortality was 1.29, for CHD events was 1.21, and for incident AF was 1.68.
Both CHD mortality and AF rates – but not the other outcomes – were significantly higher among patients with the very lowest thyrotropin levels, the investigators said.
Those increased risks did not change materially when the data were analyzed according to patient age or the presence of preexisting cardiovascular disease. The results also remained the same in sensitivity analyses, even after the data were adjusted to account for body mass index and the use of lipid-lowering or antihypertensive medication.
In contrast, cancer mortality and stroke mortality were no higher in patients with subclinical hyperthyroidism than in control patients.
"Our results, based on individual participant data, demonstrate that there is indeed an increased risk of total and CHD mortality associated with subclinical hyperthyroidism," Dr. Collet and associates said.
Their findings support recent guidelines stating that the treatment of subclinical hyperthyroidism "should be strongly considered" in all patients aged 65 and older whose thyrotropin level is lower than 0.10 mIU/L.
The study could not address whether such therapy decreases the elevated risk of death, CHD events, or AF, the authors said, and no large randomized controlled trial has yet attempted to do so. Such a trial would be "challenging" to perform because of the low prevalence of subclinical hyperthyroidism in the general population, they added.
The Swiss National Science Foundation supported the study. The investigators reported no relevant financial conflicts of interest.
FROM ARCHIVES OF INTERNAL MEDICINE
Major Finding: Compared with euthyroid study patients, those with endogenous subclinical hyperthyroidism were at greater risk for coronary heart disease mortality (HR, 1.29), all-cause mortality (HR, 1.24), CHD events (HR, 1.21), and incident atrial fibrillation (HR, 1.68).
Data Source: A patient-level meta-analysis of data on 52,674 participants in 10 prospective cohort studies who were followed for a median of 8.8 years for CHD and mortality outcomes.
Disclosures: The Swiss National Science Foundation supported the study. No financial conflicts of interest were reported.