Lorcaserin Produced Significant Weight Loss


Major Finding: At 1 year, more obese subjects taking twice-daily lorcaserin (47%) than placebo (20%) had lost 5% or more of their body weight. Subjects taking lorcaserin also were more likely to maintain that weight loss for another year.

Data Source: Two-year double-blind multicenter randomized clinical trial comparing lorcaserin with placebo in 3,182 obese or overweight subjects.

Disclosures: The study was supported by Arena Pharmaceuticals, which employed several of the coauthors and was involved in study design, data analysis, data review, and writing and revising of the manuscript.

Lorcaserin, a selective serotonin 2C receptor agonist similar to fenfluramine and dexfenfluramine but designed to avoid the serotonin-related valvulopathy associated with those drugs, produces significant weight loss compared with placebo, according to a randomized trial of more than 3,000 subjects.

In conjunction with a behavior modification program, twice-daily lorcaserin enabled twice as many obese or overweight patients to lose 5% or more of their body weight during 1 year compared with placebo. The active drug also helped study subjects maintain their weight loss for a second year, compared with placebo, said Dr. Steven R. Smith of Florida Hospital's Translational Research Institute for Metabolism and Diabetes, Winter Park, and his associates.

However, as with other large studies of weight loss, this trial had a high attrition rate: nearly 50% of subjects dropped out by 1 year, and nearly 40% of those who remained dropped out before 2 years.

The investigators reported the results of a prospective clinical trial evaluating the efficacy and safety of lorcaserin, conducted during September 2006–February 2009 at 98 academic and private sites, and involving 3,182 subjects.

The study subjects had a body mass index of 30-45 kg/m².

Both the active-treatment and the placebo groups attended monthly sessions of individual nutritional and exercise counseling, and were encouraged to exercise moderately for 30 minutes per day and to reduce their daily caloric intake to 600 kcal below their estimated energy requirements.

After 1 year, 47% of the subjects taking lorcaserin had lost 5% or more of their baseline body weight, compared with 20% of those taking placebo, a statistically significant difference. Subjects receiving lorcaserin lost an average of 6% of their body weight, compared with an average 2% weight loss with placebo. Significantly more subjects in the lorcaserin group (23%) lost 10% or more of their body weight than in the placebo group (8%).

Among subjects who achieved a weight loss of 5% or more at 1 year, a greater proportion of those who continued taking lorcaserin in year 2 maintained that weight loss (70%) compared with subjects who were assigned to placebo in year 2 (50%).

Use of lorcaserin also was associated with significant declines in adverse metabolic measures such as fasting glucose levels, insulin levels, and glycated hemoglobin levels; waist circumference; adverse lipid measures such as total cholesterol, LDL cholesterol, and triglycerides; and markers of cardiovascular risk such as C-reactive protein levels, fibrinogen levels, blood pressure, and heart rate.

Both study groups showed improvement in quality of life measures, with a greater improvement in the lorcaserin group, Dr. Smith and his colleagues wrote (N. Engl. J. Med. 2010;363:245-56).

There were no significant differences between lorcaserin and placebo in the development of valvulopathy (less than 3% in both groups), and no severe mitral or aortic insufficiency was found. The two groups also showed no differences in change in pulmonary-artery systolic pressure.

However, “the actual incidence of [Food and Drug Administration]-defined valvulopathy was below the pretrial estimates; as a result, the trial was slightly underpowered regarding the primary echocardiographic safety end point,” the investigators noted.

Rates of serious adverse events were similarly low in both study groups, as were rates of depression, depressive symptoms, depressed mood, and suicidal thoughts.

Rates of headache and nausea were higher with lorcaserin than with placebo, but the symptoms tended to be mild and to resolve even with continued use of the drug.

The study was limited by its high dropout rate. It remains unknown whether the findings are applicable to patient groups that were excluded from the trial, such as those with a BMI over 45, an eating disorder, or diabetes.

My Take

Studies Needed to Confirm Safety

The history of many pharmacologic therapies for obesity—including rimonabant, sibutramine, fenfluramine, and dexfenfluramine—is characterized by withdrawal from the market after postmarketing discovery of serious adverse effects. Given this history, “the justification for using lorcaserin to manage obesity is not greater efficacy than currently available drugs, but rather an apparently much better safety and adverse-event profile,” said Dr. Arne Astrup.

Lorcaserin therapy also was associated with “slight, but clinically relevant, improvements in almost all reported surrogate measures of diabetes and cardiovascular risk. These findings are important in light of the problems with drugs such as rimonabant and sibutramine, which do not produce similar reductions in blood pressure, heart rate, and levels of [LDL] cholesterol that would be expected with the weight loss achieved,” he noted.

“Lorcaserin use does not seem to increase the risk of valvulopathy, pulmonary hypertension, depression, or suicidal thoughts, but phase III studies will be required to confirm these initial findings in larger populations of patients,” he added.

DR. ASTRUP is in the department of nutrition at the University of Copenhagen in Frederiksberg, Denmark. He reported being a board member and receiving grants from Novo Nordisk, Neurosearch, and Merck. These comments are taken from his editorial accompanying Dr. Smith's report (N. Engl. J. Med. 2010;363:288-90).


High Rates of Postop Sepsis Suggest Need for Screening


The rates of sepsis and septic shock following general surgery are so high that identifying high-risk patients and screening them at 12-hour intervals for signs and symptoms may be warranted, according to a report.

An analysis of data on more than 360,000 general surgery patients showed that those at highest risk are older than 60 years of age, undergo emergency rather than elective surgery, and have a major comorbidity. The findings suggest that patients with any of these three risk factors “warrant a high index of suspicion…and that this patient population would most likely benefit from mandatory sepsis screening,” said Dr. Laura J. Moore and her associates at Methodist Hospital, Houston.

To date, programs to limit perioperative complications have focused on prevention plus early recognition and treatment of thromboembolism, surgery-related MI, and surgical site infections. These efforts have produced a significant decline in all three complications and in related mortality.

But the incidences of postoperative sepsis and septic shock have remained alarmingly high—far greater than those of thromboembolism and MI—and the associated mortality also remains excessively high (50%).

To characterize the severity and extent of postoperative sepsis and septic shock, Dr. Moore and her colleagues analyzed information that had been collected prospectively in the American College of Surgeons NSQIP (National Surgical Quality Improvement Program) database. They examined data on 363,897 patients treated at 121 academic and community hospitals in 2005-2007.

A total of 8,350 patients (2.3%) developed sepsis, and 5,977 (1.6%) developed septic shock following general surgery. In comparison, pulmonary embolism developed in 0.3% and MI in 0.2%.

The development of sepsis raised the rate of 30-day mortality fourfold, whereas septic shock raised it 33-fold, the researchers said (Arch. Surg. 2010;145:695-700).

“Septic shock occurs 10 times more frequently than MI and has the same mortality rate; thus, it kills 10 times more people,” they said. “Therefore, our level of vigilance in identifying sepsis and septic shock needs to mimic, if not surpass, our vigilance for identifying MI and PE.”

Because closer monitoring of all surgical patients for signs and symptoms of sepsis is not realistic, it should be limited to those at highest risk. In this analysis, the percentage of patients older than age 60 was only 40% in the overall study group, compared with 52% in the group that developed sepsis and 70% in the group that developed septic shock.

The rate of sepsis was only 2% and that of septic shock was only 1% in patients undergoing elective procedures, compared with rates of approximately 5% for both sepsis and septic shock in patients undergoing emergency procedures.

Finally, approximately 90% of patients who developed sepsis and 97% of those who developed septic shock had at least one major comorbidity, compared with only 70% of those who did not develop sepsis. “The presence of any of the NSQIP–documented comorbidities increased the odds of developing sepsis or septic shock by sixfold” and raised the 30-day mortality by 22-fold, Dr. Moore said.

The researchers found that clinicians at Methodist did not always identify sepsis at the bedside accurately or promptly. “A distinct window of early intervention exists in which the septic source must be eliminated and physiologic derangements corrected,” the investigators said.

The hospital implemented a program in which patients with any of these risk factors were screened every 12 hours for heart rate, white blood cell count, temperature, and respiratory rate. The program decreased sepsis-related mortality.

Disclosures: This study was supported by the Methodist Hospital Research Institute, Houston. No disclosures were reported.


Obesity at Age 18 May Raise Risk of Later Psoriatic Arthritis


People who are obese at age 18 may be at an increased risk for psoriatic arthritis later in life, according to a report in the July issue of the Archives of Dermatology.

In a single-center study of 943 psoriasis patients, those who reported being obese at age 18 were three times more likely to develop psoriatic arthritis (PsA), compared with patients who reported having a normal body mass index at age 18, reported Dr. Razieh Soltani-Arabshahi and associates of the University of Utah School of Medicine, Salt Lake City.

In a previous study, the researchers found that patients with psoriasis had an increased BMI, compared with controls. So, they “set out to study if obesity increases the risk of PsA,” using data from a large cohort of subjects enrolled in the Utah Psoriasis Initiative, the researchers noted.

The cohort included consecutive patients older than 18 years who attended university-affiliated psoriasis clinics in 2002-2008 and provided detailed demographic and clinical data. A total of 250 (27%) of the 943 subjects included in the study reported having PsA.

Of the study patients, 14% had been overweight and 5% had been obese at age 18, according to self-reported height and weight measurements.

Higher BMI was associated with an increased risk of developing PsA, independent of other risk factors such as nail involvement. Each unit increase in BMI at age 18 corresponded to a 5% increase in risk of PsA.

In addition, patients who were obese at age 18 showed an earlier onset of PsA, compared with patients of normal weight at age 18. Twenty percent of those who had been overweight or obese at 18 years developed PsA by age 35. In comparison, among patients of normal weight at age 18, 20% did not develop PsA until age 48.

Moreover, patients who had been overweight or obese at age 18 were more likely to report having severe psoriasis (47% and 57%, respectively) than patients who were of normal weight at age 18 (39%).

The design of the study did not permit the investigators to infer causality. However, it is plausible that obesity and its associated inflammatory state might contribute to both psoriasis and PsA, Dr. Soltani-Arabshahi and colleagues reported (Arch. Dermatol. 2010;146:721-6).

“Evaluation of additional sample sets in an attempt to replicate these results is imperative for strong conclusions to be drawn,” they noted.

The study was limited in that it relied on subjects’ self-report of height and weight earlier in life, self-report of psoriasis severity, and self-report of diagnosis of PsA.

The study was supported in part by the Utah Psoriasis Initiative and the Benning Foundation. Dr. Soltani-Arabshahi’s associates reported numerous industry relationships.

White Rice Raised Diabetes Risk 17%, Over Brown Rice


Consumption of white rice appears to increase the risk of developing type 2 diabetes, whereas consumption of brown rice appears to decrease that risk, according to a report based on data from three studies.

“From a public health point of view, replacing refined grains such as white rice [with] whole grains, including brown rice, should be recommended to facilitate the prevention of type 2 diabetes,” said Dr. Qi Sun of the Harvard School of Public Health, Boston, and associates.

White rice is known to have a higher glycemic index than brown rice. The consumption of white rice in the United States has more than tripled since the 1930s.

White rice's relationship to type 2 diabetes has been studied in several Asian countries, where it is a staple accounting for as much as 75% of the diet. But this is the first prospective study to specifically assess the relationship between the disease and the intake of both white and brown rice in a Western population, where white rice accounts for approximately 2% of the diet, Dr. Sun and his colleagues noted.

The researchers used data from three large cohort studies that documented food intake to examine this association, assessing diet and diabetes status in 39,765 men in the HPFS (Health Professionals Follow-Up Study), 69,120 women in the NHS I (Nurses' Health Study I), and 88,343 women in the NHS II.

There were 2,648 incident cases of diabetes during 20 years of follow-up in the HPFS, 5,500 cases during 22 years of follow-up in the NHS I, and 2,359 cases during 14 years of follow-up in the NHS II, Dr. Sun and his colleagues reported.

Greater consumption of white rice was associated with a higher risk of diabetes across all three studies. This association was attenuated after the data were adjusted to account for multiple lifestyle and dietary risk factors, “but a trend of increased risk associated with high white rice intake remained,” the investigators said.

“In comparison with those in the lowest category of white rice intake, participants who had at least five servings of white rice per week had a 17% higher risk of developing type 2 diabetes,” they said (Arch. Intern. Med. 2010;170:961-9).

In contrast, greater consumption of brown rice was associated with a lower risk of diabetes. This association was attenuated but still remained significant after the data were adjusted to account for lifestyle and dietary risk factors.

“When compared with the participants who ate less than 1 serving of brown rice per month, the pooled risk reduction of type 2 diabetes was 0.89 for intake of 2 or more servings per week,” Dr. Sun and colleagues said.

“Because brown rice consumption levels were rather low in our participants, we could not determine whether brown rice intake at much higher levels is associated with a further reduction of diabetes risk,” they added.

The researchers then assessed the relative risks associated with replacing one-third of a serving of white rice per day with the same amount of brown rice. “In all three cohorts, substituting brown rice for white rice was consistently associated with a lower risk of type 2 diabetes.” Every 50-g daily substitution of brown rice for white was associated with a relative risk of 0.84, corresponding to a 16% lower risk.

The study evaluated the association between diet and diabetes among working, highly educated health professionals of predominantly European ancestry. The findings may not be generalizable to other populations, Dr. Sun and associates said.

Disclosures: The study was supported by a grant from the National Institutes of Health. Dr. Sun is supported by Unilever Corporate Research. No financial conflicts of interest were reported.

Substituting brown rice for white was consistently associated with a lower risk for type 2 diabetes.


From Archives of Internal Medicine

Annual High-Dose Vit. D Raises Fall, Break Risks


Far from protecting older women from falls and fractures, yearly high-dose oral vitamin D raised the risk of falls by 15% and that of fractures by 26%, according to an Australian study.

These risks were highest in the 3-month period immediately after each annual dose, said Kerrie M. Sanders, Ph.D., of the University of Melbourne and her associates.

As this study used the “largest total annual dose of vitamin D (500,000 IU) reported in any large randomized controlled trial,” it is possible that these adverse outcomes are related to the dosage, or perhaps to the once-a-year regimen. But the levels of 25-hydroxycholecalciferol achieved in these subjects can also occur with other dosing regimens, so it appears that the safety of all high-dose vitamin D supplementation warrants further examination, they noted.

Dr. Sanders and her colleagues performed their single-center study in 2,256 white women aged 70 years or older. The women were considered at risk for hip fracture because of their family or personal histories or because they reported recent falls.

The subjects were randomly assigned to receive a single oral dose of vitamin D (cholecalciferol) or a matching placebo at the same time every year for 3-5 years. Lab studies in a subgroup of the subjects showed that the active treatment raised levels of 25-hydroxycholecalciferol an average of 41%, as expected.

There were 5,404 falls during follow-up, involving 74% of the women taking vitamin D and 68% of those taking placebo. The rate of falls was 83 per 100 person-years with vitamin D, compared with 73 per 100 person-years with placebo, a statistically significant difference.

The increase in falls with active treatment was noted in falls that produced fractures and those that didn't, and in falls that produced soft-tissue injury.

A total of 155 women taking vitamin D sustained 171 fractures during follow-up, compared with 125 women taking placebo who sustained 135 fractures. This translates to a rate of 4.9 fractures per 100 person-years with active treatment and 3.9 fractures per 100 person-years with placebo.

These risks of falls and of fractures did not change after the data were adjusted to account for subjects' calcium intake.

“Contrary to our hypothesis, participants receiving annual high-dose oral cholecalciferol experienced 15% more falls and 26% more fractures than [did] the placebo group. Women not only experienced excess fractures after more frequent falls but also experienced more fractures that were not associated with a fall,” the investigators noted (JAMA 2010;303:1815-22).

The reason for these counterproductive effects is not yet known, but it is possible that the once-a-year oral regimen—compared with either a regimen that divides the oral doses or one that uses intramuscular doses—is at fault. “It is reasonable to speculate that high serum levels of vitamin D or metabolites resulting from the large annual dose, subsequent decrease in the levels, or both might be causal,” they wrote.

In an accompanying editorial, Dr. Bess Dawson-Hughes and Susan S. Harris, D.Sc., of the Jean Mayer USDA Human Nutrition Research Center on Aging at Tufts University, Boston, said that these study findings should not detract from the importance of “correcting widespread vitamin D deficiency and insufficiency.

“There is no evidence for adverse effects of more frequent, lower-dose regimens, so daily, weekly, or monthly dosing with vitamin D3 appears to be the best option for clinicians at this time,” they noted (JAMA 2010;303:1861-2).

Disclosures: This study was supported by the National Health and Medical Research Council and the Australian Government Department of Health and Ageing. No conflicts of interest were reported by Dr. Sanders and her associates, Dr. Dawson-Hughes, or Dr. Harris.

From JAMA

Beta-Blockers May Boost COPD Survival Rates


Beta-blockers appeared to improve survival in chronic obstructive pulmonary disease (COPD) and to decrease the risk of exacerbations by nearly 30%, according to a recent report.

Beta-blockers are known to improve survival in patients with a wide spectrum of cardiovascular diseases. But the benefits shown in an observational cohort study were surprising, the study investigators noted, because the drugs often are withheld in COPD patients because of fear they will promote bronchospasm and induce respiratory failure.

Even more surprising was the finding that beta-blockers benefited COPD patients who had no known cardiovascular disease, said Dr. Frans H. Rutten of the University Medical Center Utrecht, the Netherlands, and his associates.

“Traditional dogma … states that beta-blockers are contraindicated in COPD because of their presumed bronchoconstrictive properties and 'competition' with beta-2 agonists,” the researchers said. In theory, however, those drugs could benefit COPD patients “by tempering the sympathetic nervous system or by reducing the ischemic burden,” they added (Arch. Intern. Med. 2010;170:880-7).

The researchers assessed 2,230 patients aged 45 years and older (mean age 65 years) who attended 23 general practices in the vicinity of Utrecht from 1995 through 2005. Those patients either had COPD at the start of the study period (560 patients) or developed the disorder during the study (1,670 patients).

A total of 665 patients used beta-blockers, while 1,565 did not.

Overall, 686 patients in the study died. All-cause mortality was 27% among those who used beta-blockers, a significantly smaller proportion than the 32% among subjects who did not use the drugs.

Similarly, 1,055 of the study's patients had at least one COPD exacerbation during follow-up. That included 43% of those who used beta-blockers, a significantly smaller proportion than the 49% rate in patients who did not use the drugs.

“To our knowledge, this is the first observational study that shows that long-term treatment with beta-blockers may improve survival and reduce the risk of an exacerbation of COPD in the broad spectrum of patients” with COPD, Dr. Rutten and his colleagues said.

“Cardioselective beta-blockers had larger beneficial effects on mortality than nonselective ones, but similar effects on risk of exacerbation of COPD,” they said.

“Interestingly, the association of beta-blocker use with all-cause mortality and risk of exacerbation of COPD also remained in patients who were taking two or more pulmonary drugs or who were using inhaled beta-2 sympathicomimetics or anticholinergic agents,” the investigators noted. “Therefore, inhaled pulmonary medication seems not to interfere with the results of beta-blocker use.”

A recent meta-analysis of randomized trials has already shown that beta-blockers are well tolerated by COPD patients. With the results of the observational study added to those findings, it seems clear that “the time has come to confirm these results in a randomized controlled trial,” Dr. Rutten and his associates said.

The study findings “provide a rationale for the practicing clinicians to use beta-blockers (even noncardioselective ones such as carvedilol) cautiously in their patients with COPD who also have a coexisting cardiovascular condition for which a beta-blocker is required,” noted Dr. Don D. Sin and Dr. S.F. Paul Man, both of the University of British Columbia and Providence Heart and Lung Institute, Vancouver, in an editorial comment accompanying the report (Arch. Intern. Med. 2010;170:849-50).

“These data may be of great clinical relevance in COPD because cardiovascular diseases (and not respiratory failure) are the leading causes of hospitalization,” Dr. Sin and Dr. Man noted, “accounting for nearly 50% of all hospital admissions, as well as being the second-leading cause of mortality, responsible for 25% of all deaths, in patients with mild to moderate COPD.”

Disclosures: No financial conflicts of interest were reported.


Low-Dose Oral Steroids Work for Acute COPD


Major Finding: Treatment failure occurred in 10.9% of patients given high-dose IV steroids and 10.3% of those given low-dose oral steroids, a nonsignificant difference.

Data Source: An epidemiologic cohort study of nearly 80,000 adults hospitalized at 414 U.S. medical centers for acute exacerbations of COPD.

Disclosures: Premier Healthcare Informatics of Charlotte, N.C., provided the data used in this study. The authors reported no financial conflicts of interest.

Low-dose oral corticosteroids are as effective as high-dose intravenous corticosteroids in the initial treatment of acute exacerbations of COPD, according to findings from a retrospective cohort study of nearly 80,000 COPD hospitalizations.

In the study, 92% of the patients were initially given high-dose IV corticosteroids instead of less-risky low-dose oral steroids. This contrasts sharply with recommendations favoring a low-dose regimen included in clinical guidelines published by leading professional societies in the United States, the United Kingdom, and other European nations, said Dr. Peter K. Lindenauer of the Center for Quality of Care Research at Baystate Medical Center, Springfield, Mass., and his associates.

Dr. Lindenauer and his colleagues compared outcomes with these two treatment approaches using a database designed to measure health care quality and utilization. They reviewed the records of 79,985 hospitalizations for acute exacerbation of COPD at 414 U.S. medical centers over a 2-year period.

“Participating hospitals are geographically diverse and similar to the composition of acute care hospitals nationwide and are predominantly small to midsize nonteaching facilities that serve a largely urban patient population,” they noted.

The study participants had a median age of 69 years and had COPD that was uncomplicated by pneumonia or pulmonary embolism. The primary outcome was a composite measure of treatment failure, defined as the need for mechanical ventilation after the second day of hospitalization; death during hospitalization; or readmission for COPD within 30 days of discharge.

Overall, 11% of patients had this primary outcome, with approximately 1% requiring mechanical ventilation, 1% dying during hospitalization, and 9% being readmitted.

A total of 92% of patients were initially treated with high-dose IV steroids, and 8% were started on low-dose oral steroids. The composite outcome of treatment failure occurred in 10.9% of patients given high-dose IV steroids and 10.3% of those given low-dose oral steroids, a nonsignificant difference.

Similarly, the individual outcome of in-hospital mortality was approximately 1% in both groups, the investigators said (JAMA 2010;303:2359-67).

Further analysis showed that patients given oral steroids as recommended had lower hospital costs and shorter lengths of stay. Previous studies of the issue have shown that the oral route decreases patient pain and immobility, they added.

The findings clearly show that not complying with treatment recommendations and instead giving high-dose IV steroids to patients with acute exacerbations of COPD “does not appear to be associated with any measurable clinical benefit and at the same time exposes patients to the risks and inconvenience of an intravenous line, potentially unnecessarily high doses of steroids, greater hospital costs, and longer lengths of stay,” Dr. Lindenauer and his associates said.

“Because high-dose IV therapy is so common and because patients with COPD are hospitalized frequently for exacerbations, our findings have a significant potential to alter practice,” they added.

This study was not designed to determine why so many clinicians in real-world practice don't comply with recommendations.


Weight, Fat Gain in Middle and Older Age Linked to Diabetes


Major Finding: Subjects in the highest adiposity category had a two- to sixfold greater risk of incident diabetes than those in the lowest category.

Data Source: Prospective, longitudinal cohort study of 4,193 subjects aged 65 and older.

Disclosures: Study was supported by the National Heart, Lung, and Blood Institute; the National Institute on Aging; the University of Pittsburgh Claude D. Pepper Older Americans Independence Center; and the National Institute of Neurological Disorders and Stroke. No financial conflicts of interest were reported.

Weight gain and fat accumulation in both middle and older age raise the risk of diabetes, according to a prospective cohort study.

The links between overweight and diabetes, and between central adiposity and diabetes, are well known in younger adults but have not been fully explored in older adults, said Mary L. Biggs, Ph.D., of the University of Washington School of Public Health and Community Medicine, Seattle, and her associates.

They examined these associations using data from 4,193 subjects who participated in the Cardiovascular Health Study, a prospective, longitudinal cohort study of people aged 65 years and older living in four communities in North Carolina, Maryland, Pennsylvania, and California. The subjects were enrolled beginning in 1989 and followed annually for a median of 12 years.

The mean age at baseline was 73 years; 59% of the subjects were women, and 10% were African American.

Changes in the participants' weight, body mass index, fat mass, waist circumference, waist-to-hip ratio, and waist-to-height ratio were documented from baseline onward, at ages 65 and older. The subjects also were asked to recall their body measurements at age 50, so that their BMI at that age could be calculated.

During follow-up, 339 subjects developed diabetes.

Measures of overall and of central adiposity at both middle age (50 years) and older age (at least 65 years) were significantly associated with the risk of developing diabetes in men and women. Subjects in the highest category of adiposity had a two- to sixfold greater risk of incident diabetes than did those in the lowest category.

Similarly, the risk of diabetes rose monotonically with the amount of weight gained between age 50 and baseline. “Compared with participants whose weight remained stable [during that interval], those who gained 9 kg or more between the age of 50 years and study entry had an approximately threefold greater risk of developing diabetes during follow-up,” Dr. Biggs and her colleagues said (JAMA 2010;303:2504-12).

“Participants who were obese (BMI greater than or equal to 30) at 50 years of age and who experienced the most weight gain (greater than 9 kg) between the age of 50 years and study entry had five times the risk of developing diabetes, compared with weight-stable participants with normal BMI (less than 25) at 50 years of age,” they added.

Subjects in the highest categories of both BMI and waist circumference were more than four times as likely to develop diabetes as were subjects in the lowest categories of those measures, the investigators noted.

The increased risk associated with adiposity appeared to wane as subjects aged, but even among participants aged 75 and older, those in the highest category of BMI still had double the risk of developing diabetes, compared with those in the lowest category of BMI.

The reason that diabetes risk declines somewhat after age 75 is not known. It is possible that anthropometric measures may not adequately quantify body fat at that age because of age-related changes in body composition, such as decreased muscle mass and decreased height.

“A second possibility is that regional fat distribution is more important in the etiology of diabetes than absolute fat mass,” the researchers wrote. Another reason may be that the pathology of diabetes in older adults differs from that in younger adults. Or it simply may be that people more susceptible to adiposity-related death do not survive into old age, resulting in selective survival of fitter people, Dr. Biggs and her colleagues said.

The investigators were somewhat surprised to note that the risk of diabetes did not decline in subjects who lost weight during follow-up. Again, the reason is not yet known.

“Older adults may lose proportionately more muscle mass with weight loss than younger ones, decreasing the accuracy of weight loss as a surrogate for loss of adipose tissue in older adults. Furthermore, the loss of skeletal muscle mass may decrease insulin sensitivity, negating the benefit derived from fat loss,” they noted.

However, clinicians should note that the relation between weight loss and diabetes risk in older adults is complex, and “our results do not preclude the possibility that voluntary weight loss reduces the risk of diabetes in older adults,” they added.


Weight gain and fat accumulation in both middle and older age raise the risk of diabetes, according to a prospective cohort study.

The links between overweight and diabetes, and between central adiposity and diabetes, are well known in younger adults but have not been fully explored in older adults, said Mary L. Biggs, Ph.D., of the University of Washington School of Public Health and Community Medicine, Seattle, and her associates.

They examined these associations using data from 4,193 subjects who participated in the Cardiovascular Health Study, a prospective, longitudinal cohort study of people aged 65 years and older living in four communities in North Carolina, Maryland, Pennsylvania, and California. The subjects were enrolled beginning in 1989 and followed annually for a median of 12 years.

The mean age at baseline was 73 years; 59% of the subjects were women, and 10% were African American.

Changes in the participants' weight, body mass index, fat mass, waist circumference, waist-to-hip ratio, and waist-to-height ratio were documented from baseline onward, at ages 65 and older. The subjects also were asked to report body composition measures from when they were age 50, so that their BMI at age 50 could be calculated.

During follow-up, 339 subjects developed diabetes.

Measures of overall and of central adiposity at both middle age (50 years) and older age (at least 65 years) were significantly associated with the risk of developing diabetes in men and women. Subjects in the highest category of adiposity had a two- to sixfold greater risk of incident diabetes than did those in the lowest category.

Similarly, the risk of diabetes rose monotonically with the amount of weight gained between age 50 and baseline. “Compared with participants whose weight remained stable [during that interval], those who gained 9 kg or more between the age of 50 years and study entry had an approximately threefold greater risk of developing diabetes during follow-up,” Dr. Biggs and her colleagues said (JAMA 2010;303:2504-12).

“Participants who were obese (BMI greater than or equal to 30) at 50 years of age and who experienced the most weight gain (greater than 9 kg) between the age of 50 years and study entry had five times the risk of developing diabetes, compared with weight-stable participants with normal BMI (less than 25) at 50 years of age,” they added.

Subjects in the highest categories of both BMI and waist circumference were more than four times as likely to develop diabetes as were subjects in the lowest categories of those measures, the investigators noted.
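The risk comparisons reported here are relative risks from cohort incidence. A minimal sketch of the arithmetic, using hypothetical counts rather than the study's actual data:

```python
# Illustrative only: the counts below are hypothetical, not the
# Cardiovascular Health Study's data.

def relative_risk(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """Risk ratio = incidence in exposed group / incidence in unexposed group."""
    risk_exposed = cases_exposed / n_exposed
    risk_unexposed = cases_unexposed / n_unexposed
    return risk_exposed / risk_unexposed

# Hypothetical: 120 of 800 subjects in the highest BMI category develop
# diabetes during follow-up, vs. 30 of 800 in the lowest category.
rr = relative_risk(120, 800, 30, 800)
print(round(rr, 1))  # 4.0 -> "more than four times as likely"
```

The published estimates are adjusted for covariates, so crude ratios like this one would not reproduce them exactly.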

The increased risk associated with adiposity appeared to wane as subjects aged, but even among participants aged 75 and older, those in the highest category of BMI still had double the risk of developing diabetes, compared with those in the lowest category of BMI.

The reason that the adiposity-related risk of diabetes wanes after age 75 is not known. It is possible that anthropometric measures may not adequately quantify body fat at that age because of age-related changes in body composition, such as decreased muscle mass and decreased height.

“A second possibility is that regional fat distribution is more important in the etiology of diabetes than absolute fat mass,” the researchers wrote. Another reason may be that the pathology of diabetes in older adults differs from that in younger adults. Or it simply may be that people more susceptible to adiposity-related death do not survive into old age, resulting in selective survival of fitter people, Dr. Biggs and her colleagues said.

The investigators were somewhat surprised to note that the risk of diabetes did not decline in subjects who lost weight during follow-up. Again, the reason is not yet known.

“Older adults may lose proportionately more muscle mass with weight loss than younger ones, decreasing the accuracy of weight loss as a surrogate for loss of adipose tissue in older adults. Furthermore, the loss of skeletal muscle mass may decrease insulin sensitivity, negating the benefit derived from fat loss,” they noted.

However, clinicians should note that the relation between weight loss and diabetes risk in older adults is complex, and “our results do not preclude the possibility that voluntary weight loss reduces the risk of diabetes in older adults,” they added.

Display Headline
Weight, Fat Gain in Middle and Older Age Linked to Diabetes

Strategies to Limit Bleeding in PCI Underused

Article Type
Changed
Display Headline
Strategies to Limit Bleeding in PCI Underused

Strategies to limit periprocedural bleeding in percutaneous coronary interventions are effective, particularly in the patients at highest risk for bleeding, according to a study of a nationwide database.

Unfortunately, these strategies are underused, and the very patients most likely to benefit are the least likely to receive them, reported Dr. Steven P. Marso of Saint Luke's Mid America Heart Institute, Kansas City, Mo., and his associates.

Targeting bleeding complications “holds great potential for improving the safety and cost-effectiveness of PCI,” since more than 1 million of the procedures are performed in the United States every year and bleeding complications occur in 2%-6% of them. Major bleeding events raise the risks for early and late mortality, MI, and stroke, and also “result in an average 4- to 6-day increase in length of stay and, on average, increase hospital costs by $6,000-$8,000,” the investigators noted.

They studied bleeding complications using the National Cardiovascular Data Registry, a nationwide database of catheterization and PCI procedures at more than 1,100 medical centers. They assessed data on more than 1.5 million patients who underwent PCI via the femoral artery approach during the period from Jan. 1, 2004, to Sept. 30, 2008.

To mitigate bleeding, vascular closure devices such as Angio-Seal (St. Jude Medical) and Perclose A-T (Abbott Vascular), the drug bivalirudin (Angiomax), or both were used in 24%, 23%, and 18% of patients, respectively. Manual compression, used in 35% of patients, served as the control strategy.

Periprocedural bleeding, the primary outcome for this study, occurred in more than 30,000 patients (2%). This was defined as bleeding that required a blood transfusion or a prolonged hospital stay, or bleeding that was associated with a greater than 3-g/dL decline in hemoglobin level.

Bleeding events occurred in 2.1% of patients in whom vascular closure devices were used, in 1.6% of those who received bivalirudin, and in 0.9% of those who received both preventive strategies. In comparison, bleeding events occurred in 2.8% of patients who received manual compression.

Independently of preprocedural risk of bleeding, vascular closure devices were associated with 6.7 fewer bleeding events per 1,000 patients, bivalirudin was associated with 8.5 fewer events per 1,000, and the combination of both devices and bivalirudin was associated with 14.2 fewer events, compared with manual compression.

In a further analysis of the data, patients were stratified according to their estimated risk for bleeding before the procedure began. As bleeding risk increased, differences in actual bleeding rates between the four strategies became more pronounced.

Among the highest-risk patients, “the use of vascular closure devices plus bivalirudin was associated with an absolute 3.8% lower bleeding rate, which translates into an estimated number needed to treat of 33 to prevent 1 bleeding event, as compared with manual compression,” Dr. Marso and his colleagues wrote (JAMA 2010;303:2156-64).
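The number needed to treat quoted above is the reciprocal of the absolute risk reduction. A minimal sketch of that arithmetic, using the crude registry rates reported earlier (the published NNT of 33 comes from risk-adjusted models, so this naive calculation will not reproduce it):

```python
import math

def nnt(rate_control, rate_treatment):
    """Number needed to treat = 1 / absolute risk reduction,
    rounded up to a whole patient."""
    arr = rate_control - rate_treatment
    return math.ceil(1 / arr)

# Crude registry rates: 2.8% bleeding with manual compression vs. 0.9%
# with closure device plus bivalirudin.
print(nnt(0.028, 0.009))  # ARR = 1.9%, NNT of about 53
```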

A total of 40.3% of patients at highest risk received manual compression, compared with 30.8% of those with the lowest risk. However, 14.4% of the highest-risk patients received bivalirudin plus vascular closure, compared with 21.0% of the lowest-risk patients.

Translating these study findings into change in clinical practice will be “challenging” for several reasons, the researchers noted.

“First, assessing the risk for bleeding in clinical practice is neither inherently intuitive nor commonly used. Second, physicians have more experience using bivalirudin in lower-risk patients, since it was first studied in patients undergoing elective PCI and only recently in higher-risk patients,” they said.

In addition, some patients are not suited to one or the other of these strategies. Bivalirudin is not recommended in those taking anticoagulants or those who have a chronic total occlusion, and closure devices are not recommended in patients with certain anatomical limitations such as severe calcification or peripheral artery disease.

Disclosures: Dr. Marso and his associates reported receiving support from many sources, including The Medicines Company (maker of bivalirudin), Amylin Pharmaceuticals, Boston Scientific, Volcano Corp., Terumo Corp., Abbott Vascular, and NovoNordisk.


H1N1 Experiences Point to Control Strategies

Article Type
Changed
Display Headline
H1N1 Experiences Point to Control Strategies

Strategies to control the spread of seasonal influenza outbreaks work to help curb influenza A(H1N1) outbreaks as well, suggest two studies conducted in Singapore and Hong Kong.

In the first report, standard containment strategies along with “ring chemoprophylaxis” were effective at controlling transmission of H1N1 in Singapore early in the course of the 2009 pandemic, according to Dr. Vernon J. Lee of the Singapore Ministry of Defense's Center for Health Services Research.

In a separate report on the early H1N1 experience in Hong Kong, researchers found that in community households, the virus showed traits that were broadly similar to those of seasonal influenza A in transmissibility, viral shedding, and clinical illness.

While these findings have implications for future outbreaks, they do not necessarily “inform the success of potential containment efforts implemented at the source of the next influenza pandemic or implemented to prevent the introduction of influenza into a community,” Dr. Timothy M. Uyeki of the Centers for Disease Control and Prevention, Atlanta, pointed out in an editorial accompanying the two reports (N. Engl. J. Med. 2010; 362:2221-3).

In the first study, Dr. Lee and associates described early H1N1 outbreaks in four military camps, including one military hospital. This is one of the first studies to document the real-world effectiveness of antiviral “ring chemoprophylaxis” in a pandemic, they said.

“Ring chemoprophylaxis” entails containing a viral outbreak within a targeted geographic area surrounding an index case by administering a drug (in this case, oseltamivir) to everyone in the area, not just to known, close contacts. In this study, all members of the affected military units, where opportunities for contact were substantial, were included in the prophylaxis effort, even though they did not fulfill standard criteria as close contacts. Larger “rings” of prophylaxis were established if cases developed in multiple units.

All personnel suspected of being infected were isolated in the hospital if they tested positive. All asymptomatic personnel in the same unit were screened 3 times per week using nasopharyngeal swabs and PCR testing plus symptom questionnaires and monitoring of body temperature, until the outbreak subsided.

These close-quarters settings had the potential for intense transmission of the virus, similar to environments such as hospital wards, schools, and long-term care facilities. However, the “ring” approach based on spatial proximity brought an early halt to transmission, the investigators noted.

Among the 1,175 personnel in the affected units, 82 confirmed cases of H1N1 infection were documented during the four outbreaks. Only 7 of these patients (0.6% of the study population) developed symptoms after the prophylaxis program had begun; the remaining 75 had been infected before the intervention was implemented. The overall infection rate was 5.9%.

By comparison, the rate of influenza infection was 57% in another study of Taiwanese military recruits, 42% aboard a U.S. Navy ship, 71% in a British boarding school, and 35% in a New York City school, Dr. Lee and his colleagues said (N. Engl. J. Med. 2010;362:2166-74).

“Our experience provides evidence that early case detection and the use of antiviral ring prophylaxis effectively truncate the spread of infection during an epidemic, giving empirical support to theoretical mathematical models,” they said.

“Aggressive prophylaxis may be justifiable … to protect vulnerable populations such as frail or elderly residents of long-term care facilities or persons in closed or semiclosed environments such as schools, prisons, and military camps,” Dr. Lee and his associates added.

In the second study, Benjamin J. Cowling, Ph.D., of the University of Hong Kong, and his associates assessed both H1N1 and seasonal flu transmission among 99 index patients and their 284 contacts in 99 households throughout the city at the beginning of the pandemic.

Clinical illness was similar between H1N1 and the seasonal flu. The incubation period was estimated to be 3.2 days for H1N1, very similar to the 3.4-day incubation period for the seasonal flu. Also similar was the duration of viral shedding, which was 5-7 days for both infections.

The secondary attack rate—the rate at which household contacts acquired the virus from index cases—also was similar between H1N1 and seasonal flu. However, the initial attack rate, meaning the rate at which index cases became infected, was much higher with H1N1 than with seasonal flu, as was reported worldwide.
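The secondary attack rate described above is a simple proportion of exposed household contacts who become infected. A minimal sketch; the 284-contact denominator is from the study, but the numerator here is a hypothetical figure for illustration:

```python
# Illustrative only: the infected-contact count is hypothetical.

def secondary_attack_rate(infected_contacts, total_contacts):
    """Proportion of household contacts of index cases who acquire
    the infection during follow-up."""
    return infected_contacts / total_contacts

# Hypothetical: 23 of the 284 household contacts test positive.
sar = secondary_attack_rate(23, 284)
print(f"{sar:.1%}")  # 8.1%
```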

“This difference in attack rates could be associated with the lack of preexisting immunity against the pandemic influenza virus, rather than an inherent difference in transmissibility” between H1N1 and seasonal flu, Dr. Cowling and his colleagues pointed out (N. Engl. J. Med. 2010;362:2175-84).

Overall, their findings suggest that H1N1 flu and seasonal flu viruses “are associated with similar viral-load dynamics, severity of clinical illness, and transmissibility,” the investigators said.

Disclosures: Dr. Lee's study was supported by the Singapore Ministry of Defense; the National University of Singapore; and the Singapore Agency for Science, Research, and Technology. Dr. Cowling's study was supported by the National Institute of Allergy and Infectious Diseases (U.S.) and Hong Kong University. Dr. Lee's associates reported ties to GlaxoSmithKline, Novartis, Adamas Pharmaceuticals, Baxter, MerLion Pharmaceuticals, Pfizer, and Wyeth.

Article PDF
Author and Disclosure Information

Publications
Topics
Author and Disclosure Information

Author and Disclosure Information

Article PDF
Article PDF

Strategies to control the spread of seasonal influenza outbreaks work to help curb influenza A(H1N1) outbreaks as well, suggest two studies conducted in Singapore and Hong Kong.

In the first report, standard containment strategies along with “ring chemoprophylaxis” were effective at controlling transmission of H1N1 in Singapore early in the course of the 2009 pandemic, according to Dr. Vernon J. Lee of the Singapore Ministry of Defense's Center for Health Services Research.

In a separate report on the early H1N1 experience in Hong Kong, researchers found that in community households, the virus showed traits that were broadly similar to those of seasonal influenza A in transmissibility, viral shedding, and clinical illness.

While these findings have implications for future outbreaks, they do not necessarily “inform the success of potential containment efforts implemented at the source of the next influenza pandemic or implemented to prevent the introduction of influenza into a community,” Dr. Timothy M. Uyeki of the Centers for Disease Control and Prevention, Atlanta, pointed out in an editorial accompanying the two reports (N. Engl. J. Med. 2010; 362:2221-3).

In the first study, Dr. Lee and associates described early H1N1 outbreaks in four military camps, including one military hospital. This is one of the first studies to document the real-world effectiveness antiviral “ring chemoprophylaxis” in a pandemic, they said.

“Ring chemoprophylaxis” entails containing a viral outbreak within a targeted geographic area surrounding an index case by administering a drug—in this case, oseltamivir—to everyone in the area, not just to known, close contacts. In this study, all members of the affected military units, where opportunities for contact were substantial, were included in prophylaxis effort, even though they did not fulfill standard criteria as close contacts. Larger “rings” of prophylaxis were established if cases developed in multiple units.

All personnel suspected of being infected were isolated in the hospital if they tested positive. All asymptomatic personnel in the same unit were screened 3 times per week using nasopharyngeal swabs and PCR testing plus symptom questionnaires and monitoring of body temperature, until the outbreak subsided.

Such a strategy had the potential for intense transmission of the virus, similar to environments such as hospital wards, schools, and long-term care facilities. However, the “ring” approach based on spatial proximity brought an early halt to transmission, they noted.

Among a total of 1,175 personnel, a total of 82 confirmed cases of H1N1 virus were documented during the 4 outbreaks. Only 7 of these patients (0.6% of the study population) developed symptoms after the prophylaxis program had begun; the remaining 75 had been infected before the intervention was implemented. The overall infection rate was 5.9%.

By comparison, the rate of influenza infection was 57% in another study of Taiwanese military recruits, 42% aboard a U.S. Navy ship, 71% in a British boarding school, and 35% in a New York City school, Dr. Lee and his colleagues said (N. Engl. J. Med. 2010;362:2166-74).

“Our experience provides evidence that early case detection and the use of antiviral ring prophylaxis effectively truncate the spread of infection during an epidemic, giving empirical support to theoretical mathematical models,” they said.

“Aggressive prophylaxis may be justifiable … to protect vulnerable populations such as frail or elderly residents of long-term care facilities or persons in closed or semiclosed environments such as schools, prisons, and military camps,” Dr. Lee and his associates added.

In the second study, Benjamin J. Cowling, Ph.D., of the University of Hong Kong, and his associates assessed both H1N1 and seasonal flu transmission among 99 index patients and their 284 contacts in 99 households throughout the city at the beginning of the pandemic.

Clinical illness was similar between H1N1 and the seasonal flu. The incubation period was estimated to be 3.2 days for H1N1, very similar to the 3.4-day incubation period for the seasonal flu. Also similar was the duration of viral shedding, which was 5-7 days for both infections.

The secondary attack rate—the rate at which household contacts acquired the virus from index cases—also was similar between H1N1 and seasonal flu. However, the initial attack rate, meaning the rate at which index cases became infected, was much higher with H1N1 than with seasonal flu, as was reported worldwide.

“This difference in attack rates could be associated with the lack of preexisting immunity against the pandemic influenza virus, rather than an inherent difference in transmissibility” between H1N1 and seasonal flu, Dr. Cowling and his colleagues pointed out (N. Engl. J. Med. 2010;362:2175-84).

Overall, their findings suggest that H1N1 flu and seasonal flu viruses “are associated with similar viral-load dynamics, severity of clinical illness, and transmissibility,” the investigators said.

 

 

Disclosures: Dr. Lee's study was supported by the Singapore Ministry of Defense; the National University of Singapore; and the Singapore Agency for Science, Research, and Technology. Dr. Cowling's study was supported by the National Institute of Allergy and Infectious Diseases (U.S.) and Hong Kong University. Dr. Lee's associates reported ties to GlaxoSmithKline, Novartis, Adamas Pharmaceuticals, Baxter, MerLion Pharmaceuticals, Pfizer, and Wyeth.

Strategies to control the spread of seasonal influenza outbreaks work to help curb influenza A(H1N1) outbreaks as well, suggest two studies conducted in Singapore and Hong Kong.

In the first report, standard containment strategies along with “ring chemoprophylaxis” were effective at controlling transmission of H1N1 in Singapore early in the course of the 2009 pandemic, according to Dr. Vernon J. Lee of the Singapore Ministry of Defense's Center for Health Services Research.

In a separate report on the early H1N1 experience in Hong Kong, researchers found that in community households, the virus showed traits that were broadly similar to those of seasonal influenza A in transmissibility, viral shedding, and clinical illness.

While these findings have implications for future outbreaks, they do not necessarily “inform the success of potential containment efforts implemented at the source of the next influenza pandemic or implemented to prevent the introduction of influenza into a community,” Dr. Timothy M. Uyeki of the Centers for Disease Control and Prevention, Atlanta, pointed out in an editorial accompanying the two reports (N. Engl. J. Med. 2010; 362:2221-3).

In the first study, Dr. Lee and his associates described early H1N1 outbreaks in four military camps, including one military hospital. This is one of the first studies to document the real-world effectiveness of antiviral “ring chemoprophylaxis” in a pandemic, they said.

“Ring chemoprophylaxis” entails containing a viral outbreak within a targeted geographic area surrounding an index case by administering a drug—in this case, oseltamivir—to everyone in the area, not just to known, close contacts. In this study, all members of the affected military units, where opportunities for contact were substantial, were included in the prophylaxis effort, even though they did not fulfill standard criteria as close contacts. Larger “rings” of prophylaxis were established if cases developed in multiple units.

All personnel suspected of being infected were tested, and those who tested positive were isolated in the hospital. All asymptomatic personnel in the same unit were screened three times per week, using nasopharyngeal swabs with PCR testing plus symptom questionnaires and monitoring of body temperature, until the outbreak subsided.

Such settings had the potential for intense transmission of the virus, much like hospital wards, schools, and long-term care facilities. However, the “ring” approach based on spatial proximity brought an early halt to transmission, they noted.

Among 1,175 personnel, a total of 82 confirmed cases of H1N1 virus were documented during the four outbreaks. Only 7 of these patients (0.6% of the study population) developed symptoms after the prophylaxis program had begun; the remaining 75 had been infected before the intervention was implemented. The overall infection rate was 5.9%.
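The quoted 0.6% follows directly from the counts above; a quick arithmetic check (the rounding is ours):

```python
# Counts reported for the four Singapore military outbreaks.
total_personnel = 1175
confirmed_cases = 82
cases_after_prophylaxis = 7

# Cases already infected before the intervention began.
cases_before_intervention = confirmed_cases - cases_after_prophylaxis
assert cases_before_intervention == 75

# Share of the study population who developed symptoms after prophylaxis started.
post_prophylaxis_rate = cases_after_prophylaxis / total_personnel
print(f"{post_prophylaxis_rate:.1%}")  # 0.6%
```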

By comparison, the rate of influenza infection was 57% in another study of Taiwanese military recruits, 42% aboard a U.S. Navy ship, 71% in a British boarding school, and 35% in a New York City school, Dr. Lee and his colleagues said (N. Engl. J. Med. 2010;362:2166-74).

“Our experience provides evidence that early case detection and the use of antiviral ring prophylaxis effectively truncate the spread of infection during an epidemic, giving empirical support to theoretical mathematical models,” they said.

“Aggressive prophylaxis may be justifiable … to protect vulnerable populations such as frail or elderly residents of long-term care facilities or persons in closed or semiclosed environments such as schools, prisons, and military camps,” Dr. Lee and his associates added.

In the second study, Benjamin J. Cowling, Ph.D., of the University of Hong Kong, and his associates assessed both H1N1 and seasonal flu transmission among 99 index patients and their 284 contacts in 99 households throughout the city at the beginning of the pandemic.

Clinical illness was similar between H1N1 and the seasonal flu. The incubation period was estimated to be 3.2 days for H1N1, very similar to the 3.4-day incubation period for the seasonal flu. Also similar was the duration of viral shedding, which was 5-7 days for both infections.

The secondary attack rate—the rate at which household contacts acquired the virus from index cases—also was similar between H1N1 and seasonal flu. However, the initial attack rate, meaning the rate at which index cases became infected, was much higher with H1N1 than with seasonal flu, as was reported worldwide.
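In concrete terms, the secondary attack rate is simply the proportion of exposed household contacts who become infected. A minimal sketch, using hypothetical counts (the per-virus contact breakdown is not given above):

```python
def secondary_attack_rate(infected_contacts: int, exposed_contacts: int) -> float:
    """Proportion of exposed household contacts who acquired infection
    from the index case."""
    if exposed_contacts <= 0:
        raise ValueError("need at least one exposed contact")
    return infected_contacts / exposed_contacts

# Hypothetical example: 23 of 284 household contacts infected.
print(f"{secondary_attack_rate(23, 284):.1%}")  # 8.1%
```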

“This difference in attack rates could be associated with the lack of preexisting immunity against the pandemic influenza virus, rather than an inherent difference in transmissibility” between H1N1 and seasonal flu, Dr. Cowling and his colleagues pointed out (N. Engl. J. Med. 2010;362:2175-84).

Overall, their findings suggest that H1N1 flu and seasonal flu viruses “are associated with similar viral-load dynamics, severity of clinical illness, and transmissibility,” the investigators said.


Disclosures: Dr. Lee's study was supported by the Singapore Ministry of Defense; the National University of Singapore; and the Singapore Agency for Science, Research, and Technology. Dr. Cowling's study was supported by the National Institute of Allergy and Infectious Diseases (U.S.) and Hong Kong University. Dr. Lee's associates reported ties to GlaxoSmithKline, Novartis, Adamas Pharmaceuticals, Baxter, MerLion Pharmaceuticals, Pfizer, and Wyeth.

Display Headline: H1N1 Experiences Point to Control Strategies