Antibiotics, Topical Steroids Show No Effect in Acute Bacterial Sinusitis
Neither an antibiotic nor a steroid nasal spray is effective against acute bacterial sinusitis, according to a randomized study of 240 adults in the United Kingdom.
These findings add to the growing evidence that antibiotics do not yield useful clinical effects in this patient population, particularly when weighed against their disadvantages, and that topical steroids usually are not beneficial either.
The patients were recruited from 74 primary care practices between 2001 and 2005. The median age was 44 years, and the median duration of symptoms of bacterial sinusitis before the doctor visit was 7 days, Dr. Ian G. Williamson of the University of Southampton (England) and his associates reported.
The subjects were randomly assigned to receive 500 mg of amoxicillin 3 times per day for 7 days or a placebo, in combination with either budesonide or a placebo nasal spray once per day for 10 days. Patients reported the frequency and severity of 11 symptoms in a diary.
The proportion of patients who continued to have symptoms after 10 or more days of treatment was 29% with amoxicillin and 33% with placebo, a difference that was not significant. Similarly, the proportion who continued to have symptoms after 10 days of treatment with budesonide nasal spray was 31%—exactly the same as the proportion who continued to have symptoms with a placebo nasal spray.
There also were no differences between the study groups in time until cure was reported. The investigators said that 40% of the subjects in each group were cured by 1 week (JAMA 2007;298:2487–96).
“Among patients with the typical features of acute bacterial sinusitis, neither an antibiotic nor a topical steroid, alone or in combination, [is] effective in altering the symptom severity, the duration, or the natural history of the condition,” the researchers concluded.
“Our rigorous case definition of sinusitis is likely to mean that less-well-defined cases treated routinely by physicians in primary care will show even less effect from taking antibiotics” and nasal steroids, Dr. Williamson and his associates noted.
Topical steroids may offer some benefit in cases of bacterial sinusitis milder than those treated in this study, because drug delivery to the nasal mucosa may be more effective before thick secretions, closure of the ostium, and severe inflammation develop, they added.
The study was “the largest nonpharmaceutically funded double-blind, randomized, placebo-controlled trial assessing the effectiveness of amoxicillin in cases of acute [bacterial] sinusitis … presenting to family physicians, and the only adequately powered trial of budesonide in this patient group,” they said.
Smoking Tied to Greater Type 2 Diabetes Risk
Cigarette smoking is associated with an increased risk of developing type 2 diabetes, results of a meta-analysis suggest.
“Active smokers had an increased risk of developing type 2 diabetes, compared with nonsmokers, with a pooled relative risk of 1.44,” study investigators reported.
The researchers conducted a meta-analysis of all 25 prospective cohort studies on the question, conducted in the United States, Europe, Japan, and Israel and published between 1992 and 2006.
All of the studies examined a possible link between smoking and irregularities of glucose metabolism, and all but one found a positive association, Dr. Carole Willi of the University of Lausanne (Switzerland) and her associates wrote.
The number of study subjects ranged from 630 people to more than 700,000 people, for a total of 1.2 million subjects and 45,844 cases of incident diabetes in the meta-analysis. Overall, 35% of the people were current smokers. Follow-up ranged from 5 to 30 years.
The association between smoking and diabetes remained robust through numerous statistical analyses that explored study factors as well as clinical variables. The findings also suggested a dose-response relationship, because the association with diabetes was stronger among heavy smokers than among light smokers, and was stronger in active smokers than in former smokers.
“Given this consistency, we conclude that the relevant question should no longer be whether this association exists, but rather whether this established association is causal,” Dr. Willi and her associates said (JAMA 2007;298:2654–64).
“There is theoretical biological plausibility for causality in that smoking may lead to insulin resistance or inadequate compensatory insulin secretion responses according to several but not all studies,” they wrote.
They noted that “Smoking also has a clinically significant effect on both oral and intravenous glucose tolerance tests that could influence diabetes detection.”
The adverse effect of smoking on diabetes risk “has been generally underrecognized,” Dr. Eric L. Ding and Dr. Frank B. Hu of the Harvard School of Public Health, Boston, said in an editorial accompanying the report.
Dr. Ding and Dr. Hu estimated that 12% of all type 2 diabetes in the United States may be attributable to smoking, based on this study's estimates, statistics on smoking prevalence, and an accepted population-attributable risk formula (JAMA 2007;298:2675–6).
In addition, “an estimated 2.3 million cases of diabetes in the United States and a corresponding $14.9 billion of the annual U.S. $132 billion diabetes cost burden may be attributable to smoking,” they said.
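The attributable-fraction estimate above can be sketched with Levin's formula, PAF = p(RR − 1) / (1 + p(RR − 1)). The 35% prevalence used here is the pooled proportion of current smokers reported in the meta-analysis, assumed as a stand-in for the prevalence figure the editorialists actually used, so the result illustrates the calculation rather than reproducing their exact 12% estimate.

```python
def levin_paf(prevalence: float, relative_risk: float) -> float:
    # Levin's population-attributable fraction:
    # PAF = p * (RR - 1) / (1 + p * (RR - 1))
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Pooled relative risk of 1.44 from the meta-analysis; 0.35 is the
# proportion of current smokers across the pooled cohorts (an assumed
# input, not necessarily what the editorialists used).
paf = levin_paf(0.35, 1.44)
print(f"{paf:.1%}")  # prints "13.3%"
```

A lower smoking prevalence, such as the U.S. adult figure at the time, pushes the fraction down toward the editorial's 12% estimate.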
Although the exact mechanism by which smoking may contribute to the development of diabetes hasn't been identified, smoking is known to be related to central adiposity, to increase inflammation and oxidative stress, to directly damage beta-cell function, to impair endothelial function, and to impair insulin sensitivity and glucose tolerance, Dr. Ding and Dr. Hu said.
Given the findings of Dr. Willi and her associates, it is “important and prudent for clinicians to screen for and carefully monitor glucose levels among current and former smokers,” they added.
Calcification Predicts CHD, CVD Risks in Some Women
Women with a “low-risk” Framingham heart score who are found to have coronary artery calcification on chest CT have a sixfold greater risk of a coronary event and a fivefold greater risk of a cardiovascular event within 4 years, compared with women who have no calcification.
In the Multi-Ethnic Study of Atherosclerosis (MESA), 90% of women aged older than 45 years were classified as low risk based on the Framingham score, yet about one-third of them had coronary artery calcification on chest CT. Cardiovascular risks in these women were significantly higher than in women without such calcification.
The study is a prospective epidemiologic assessment of subclinical atherosclerosis measures in more than 6,800 men and women aged 45–84 years. They had no known cardiovascular disease at baseline in 2000, said Dr. Susan G. Lakoski of Wake Forest University, Winston-Salem, N.C., and her associates.
They followed 2,684 of the female subjects whose Framingham scores classified them as low risk, meaning that their estimated risk of coronary heart disease (CHD) or cardiovascular disease (CVD) events was less than 10% over the next 10 years. Chest CT showed 870 (32%) of these women had occult coronary artery calcification, including 105 (4%) with advanced calcification. During 4 years of follow-up, 24 of these “low-risk” subjects had CHD events, and 34 had CVD events.
The absolute risk of a CHD event was 0.9%, and of a CVD event, 1.3%. But “there was a sixfold greater risk for a CHD event in women with prevalent [calcification] compared with women [who had] no detectable coronary calcium,” which remained significant after adjusting for factors such as age and body mass index. Similarly, “there was a fivefold greater risk of a CVD event in women with prevalent [calcification].” This risk was also maintained in adjusted models (Arch. Intern. Med. 2007;167:2437–42).
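The absolute risks quoted above follow directly from the reported event counts among the 2,684 low-risk women; a quick arithmetic check:

```python
# Reconstruct the absolute 4-year event risks among the 2,684
# "low-risk" women from the event counts reported in the study.
n_followed = 2684
chd_events = 24   # coronary heart disease events
cvd_events = 34   # cardiovascular disease events

print(f"CHD: {chd_events / n_followed:.1%}")  # prints "CHD: 0.9%"
print(f"CVD: {cvd_events / n_followed:.1%}")  # prints "CVD: 1.3%"
```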
Conduct Problems Tied to Mothers' Drinking
Prenatal alcohol exposure appears to cause later conduct problems in childhood, reported Dr. Brian M. D'Onofrio of Indiana University, Bloomington, and his associates.
In contrast, the later attention and impulsivity problems seen in children who were exposed to alcohol in utero appear to be caused by other factors correlated with maternal drinking, rather than by the alcohol exposure itself, the researchers said.
Dr. D'Onofrio and his associates used data collected in a large longitudinal study of adolescents and young adults to examine the relationship between drinking in young women and behavior in their offspring. The survey, funded by the U.S. Bureau of Labor Statistics, covered a racially diverse sample of more than 6,000 subjects who were assessed annually from 1979 through 1994 and every 2 years since then (Arch. Gen. Psychiatry 2007;64:1296–304).
Dr. D'Onofrio and his associates analyzed data on a subsample of 4,912 young female subjects who had at least one child aged 4–11 years by the 2004 assessment. The women had furnished information on their substance use both before they had become pregnant and during their pregnancies. They then reported on their children's conduct problems and attention/impulsivity problems using the Behavior Problem Index.
Prenatal exposure strongly correlated with conduct problems, and children with exposure to higher levels of alcohol had more such problems than those exposed to less alcohol. Compared with children who were not exposed to alcohol in utero, those who were exposed to alcohol every day had an increase of 0.35 standard deviations in conduct problems.
This link persisted after the data were adjusted to account for potentially confounding factors such as prenatal exposure to nicotine and other drugs, maternal traits, and genetic and environmental factors. It also persisted in comparisons with siblings and cousins, and in a number of statistical models.
“The results of all models are consistent with a causal association between prenatal alcohol exposure and offspring conduct problems,” the investigators said.
In contrast, prenatal alcohol exposure did not appear to be causally related to attention/impulsivity problems, although these problems were highly prevalent in exposed children. It is likely that some other factor related to maternal drinking explains this association, they added.
This large-scale study complements but does not replace more focused studies that can more accurately assess the particular mental health problems in children who were exposed to alcohol prenatally, Dr. D'Onofrio and his associates noted.
Antioxidant Doesn't Benefit Cognitive Performance Short Term
The antioxidant β-carotene does not improve cognitive performance among healthy older men in the short term, according to a subgroup analysis of data from a longitudinal study.
These findings add to the growing list of study results concluding that counteracting long-term oxidative stress with antioxidants doesn't appear to protect against cognitive decline. However, it is still possible that long-term treatment with β-carotene may confer “modest” neuroprotection, reported Francine Grodstein, Sc.D., and her associates in the Physicians' Health Study (PHS) II.
The PHS II is an ancillary study of the Physicians' Health Study, a randomized clinical trial assessing whether vitamin supplements prevent cancer and cardiovascular disease. Cognitive evaluations were added to the trial to assess any cognitive impact of supplementation.
The PHS II study extended the follow-up on a subgroup of 7,641 male physicians (average age 73 years) from 1997 through 2003, and also added 7,000 new recruits aged 55 and older in 1998–2001.
Dr. Grodstein and her associates assessed cognitive outcomes in 2,989 subjects who took placebo and 2,967 subjects who took β-carotene for durations ranging from 2 months to 20 years. Verbal memory, immediate and delayed recall, category fluency, and mental state were assessed.
β-Carotene yielded no cognitive benefits in subjects who had taken it for 3 years or less, according to Dr. Grodstein of Harvard School of Public Health, Boston, and her associates.
However, subjects who had taken β-carotene for at least 15 years showed better scores on several cognitive measures than did those who had taken placebo. “In general, the effect of long-term β-carotene treatment was comparable to delaying cognitive aging by 1 to 1.5 years,” the researchers said (Arch. Intern. Med. 2007;167:2184–90).
Nevertheless, in a subset of 4,074 subjects who had further cognitive assessments 2–4 years later, these differences were not statistically significant.
Regarding this last finding, Dr. Kristine Yaffe of the University of California, San Francisco, said in an editorial comment accompanying this report, “it is curious that the authors minimize the results for approximately 4,000 men who had repeated cognitive testing.”
Dr. Yaffe noted that “several trials have examined relatively long durations of antioxidant exposure (up to 10 years) and failed to find an effect of treatment on cognitive outcomes” (Arch. Intern. Med. 2007;167:2167–8).
“For the clinician, there is no convincing justification to recommend the use of antioxidant dietary supplements to maintain cognitive performance in cognitively normal adults or in those with mild cognitive impairment. Furthermore, there is new concern that high-dose antioxidant supplementation, including β-carotene, may have adverse health consequences including mortality,” Dr. Yaffe said.
Men who took β-carotene for 3 years or less showed no improvement in cognitive performance. ©photo-Dave/Fotolia.com
Pedometer Use Motivates BMI, Blood Pressure Dip
Using a pedometer significantly increases a patient's physical activity level, by roughly the equivalent of 1 mile of walking per day, results of a meta-analysis suggest.
This increased activity level in turn appears to lead to clinically relevant reductions in body mass index and blood pressure, according to Dr. Dena M. Bravata of Stanford (Calif.) University and her associates.
Pedometers are small, relatively inexpensive devices worn at the hip to count the number of steps a person walks each day. They have recently become popular “as tools for motivating and monitoring physical activity,” with wearers often encouraged to aim for taking 10,000 steps daily. To date, there has been no detailed evidence of the device's effectiveness, however, and no indication that it improves health outcomes, the investigators wrote.
They conducted a meta-analysis of 26 studies, including 8 randomized clinical trials, which reported pedometer use in adult outpatients. Pooling the data allowed them to evaluate outcomes for 2,767 subjects. Mean intervention duration was 18 weeks. The mean subject age was 49 years. Most were overweight and relatively inactive at baseline, but were normotensive.
Using a pedometer significantly raised subjects' activity levels by an average of more than 2,000 steps per day, as long as it was done in conjunction with a specified step goal and the use of a step diary. Subjects increased their walking whether they worked toward a 10,000-step target or an alternative personalized step goal (JAMA 2007;298:2296–304).
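As a rough check on the "1 mile per day" framing, the reported increase of about 2,000 steps per day can be converted to distance. The steps-per-mile figure below is an assumption (roughly 2,000 steps per mile for an average adult stride), not a value taken from the study:

```python
# Rough conversion of the reported daily step increase to distance.
# STEPS_PER_MILE is an assumed average (~2,000 steps/mile for a typical
# adult stride); the meta-analysis itself reports only step counts.
STEPS_PER_MILE = 2000

def steps_to_miles(steps: float, steps_per_mile: float = STEPS_PER_MILE) -> float:
    """Convert a daily step count to approximate miles walked."""
    return steps / steps_per_mile

extra_steps = 2000  # average daily increase reported in the meta-analysis
print(f"~{steps_to_miles(extra_steps):.1f} mile(s) of extra walking per day")
```

With a shorter assumed stride (closer to 2,500 steps per mile), the same step increase would be nearer 0.8 miles, so "about 1 mile" is a reasonable round figure.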
Those who used a pedometer also significantly decreased their body mass index by 0.38 from baseline, but their weight loss was not simply a function of the increase in steps walked every day. “This suggests that participation in the intervention either increased activity not measured by the pedometer or resulted in decreased caloric consumption, or both,” the researchers noted.
Pedometer users also significantly decreased their systolic blood pressure by nearly 4 mm Hg from baseline, which is notable because most were normotensive. This reduction in blood pressure seemed to be independent of decreases in BMI, again suggesting that use of the device entails benefits not measured by step count alone, they said, adding it is not known if these improvements are sustained long term.
Using pedometers decreased patients' body mass index as well as their systolic blood pressure. Elsevier Global Medical News
Suicides, CHD Deaths Up After Gastric Bypass
Researchers have found “a substantial excess” in deaths attributable to suicide and to coronary heart disease among patients who have undergone bariatric surgery, according to a report.
This descriptive study was not designed to ascertain the basis for this excess mortality, but the investigators postulated that the reasons may be connected in part to obesity itself and its attendant comorbidities, which preceded the surgery. Continued obesity, even after substantial weight loss, as well as weight regain, also probably play a role, according to Dr. Bennet I. Omalu of the University of Pittsburgh and his associates.
The researchers reviewed the records of 16,683 bariatric surgeries performed in Pennsylvania from 1995 through 2004. There were 440 deaths, for an overall mortality of 2.6%. Age- and sex-specific death rates were substantially higher than those for the general population, even after procedure-related deaths were excluded from the analysis.
Coronary heart disease was the leading cause of death, accounting for about 20% of deaths that occurred 30 days or more after the procedure. “In the group aged 45–54 years, the CHD mortality rate for women after bariatric surgery was 15.2/10,000 person-years, compared with the rate of similarly aged women in Pennsylvania of 5.46/10,000,” Dr. Omalu and his associates wrote (Arch. Surg. 2007;142:923–8).
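The magnitude of the excess can be expressed from the figures quoted above; the arithmetic sketch below is illustrative, not a calculation performed in the paper:

```python
# Overall mortality in the Pennsylvania bariatric surgery cohort.
deaths, surgeries = 440, 16_683
overall_mortality_pct = deaths / surgeries * 100  # ~2.6%

# Rate ratio for CHD mortality in women aged 45-54 years, using the
# two rates quoted in the article (both per 10,000 person-years).
post_bypass_rate = 15.2   # after bariatric surgery
population_rate = 5.46    # similarly aged women in Pennsylvania
rate_ratio = post_bypass_rate / population_rate

print(f"Overall mortality: {overall_mortality_pct:.1f}%")
print(f"CHD mortality rate ratio: {rate_ratio:.1f}")  # roughly 2.8-fold
```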
There were 16 suicides (4%) and 14 drug overdoses (3%), some of which may have been misclassified as accidents rather than suicides. Most occurred more than a year after the surgery, “suggesting that careful follow-up, especially the need to recognize and treat depression, should be provided,” the authors noted.
Modest Weight Loss Before Bariatric Surgery Predicts Postoperative Success
High-risk morbidly obese patients who lose 10% or more of their excess body weight before undergoing bariatric surgery shed postoperative weight more rapidly than do those who do not lose the excess weight preoperatively, reported Dr. Christopher D. Still and his associates.
The investigators, of the center for nutrition and weight management at the Geisinger Medical Center in Danville, Pa., said patients who lose that “modest” amount in the preoperative period also are less likely to have a long hospital stay, probably because they have fewer complications.
“Optimal preparation for high-risk individuals with significant comorbid medical problems remains controversial,” they noted. Geisinger Medical Center's preoperative program encourages modest short-term weight loss to help control existing medical problems such as diabetes, sleep apnea, steatohepatitis, and cardiometabolic syndrome.
Dr. Still and his associates assessed the postoperative course in 884 patients who underwent open or laparoscopic Roux-en-Y gastric bypass between 2002 and 2006 at their center. Preoperative weight loss was initially attempted by means of a prudent low-fat diet and modest exercise. If that approach was ineffective, patients were instructed to follow a 1,000–1,500-kcal liquid diet.
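The 10% threshold above refers to excess body weight, which is conventionally computed against an ideal weight at a BMI of 25 kg/m2. A minimal sketch, assuming that convention (the article does not state which definition of ideal weight was used):

```python
# Sketch of the "10% of excess body weight" criterion.
# Excess body weight is commonly defined relative to an ideal weight at
# BMI 25 kg/m^2; that convention is an assumption here, not a detail
# given in the article.
IDEAL_BMI = 25.0

def excess_weight_kg(weight_kg: float, height_m: float) -> float:
    """Weight above the ideal-BMI weight for the patient's height."""
    ideal_kg = IDEAL_BMI * height_m ** 2
    return weight_kg - ideal_kg

def meets_10pct_target(start_kg: float, current_kg: float, height_m: float) -> bool:
    """True if the patient has lost at least 10% of initial excess body weight."""
    lost_kg = start_kg - current_kg
    return lost_kg >= 0.10 * excess_weight_kg(start_kg, height_m)

# Illustration: a hypothetical 1.65 m patient at the cohort's mean BMI of
# 51.3 weighs ~139.7 kg; excess over BMI 25 is ~71.6 kg, so the 10%
# preoperative target is roughly 7.2 kg.
```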
The mean patient age was 45 years, and mean body mass index was 51.3 kg/m2.
Patients who achieved the preoperative weight loss also were less likely to need a long hospital stay, possibly because of a reduced rate of postoperative complications, Dr. Still and his associates said (Arch. Surg. 2007;142:994–8).
It is possible that those who lost weight preoperatively were the most motivated and compliant patients, and thus the most likely to have successful surgical results. Longer follow-up should answer this question, they said.
HIPAA Privacy Rule May Impede Clinical Research
The Health Insurance Portability and Accountability Act's privacy rule has stymied clinical research by making it more expensive and time-consuming, according to the findings of a national survey involving more than 1,500 epidemiologists.
The Institute of Medicine commissioned this first-ever, large-scale survey to assess the effect of the privacy rule, which was implemented in 2003 to protect research subjects' privacy while still preserving the legitimate use and disclosure of their health information. The findings confirm those of case reports and smaller or single-institution studies: The privacy rule's overall effect on research has been more negative than positive, Dr. Roberta B. Ness and her associates said.
The rule requires researchers to obtain written authorization to access medical records or to obtain a waiver from an institutional review board (IRB). In practice, compliance entails following and documenting complex bureaucratic procedures—particularly patient consent—that complicate the research process.
A total of 1,527 epidemiologists from academia, industry, government, and nongovernment organizations completed the anonymous Web-based survey, which elicited both positive and negative feedback on the privacy rule, said Dr. Ness, of the University of Pittsburgh, and her associates.
Three major themes emerged from the responses. First, a solid majority “expressed frustration and concern that the implementation of the privacy rule had added patient burden without substantially enhancing privacy protection.” In the words of one respondent, an “already cumbersome patient consent form now has an additional [page and a half] explaining HIPAA restrictions. This detracts from the informed consent process pertaining to the more critical issue: the actual medical risks and benefits of participating.”
Nearly 70% of respondents said that complying with the rule made their work much more difficult; an additional 16% said it made their work more difficult. In all, 40% said the rule greatly increased costs, and another 21% said it raised costs moderately. And half said it added considerably to the time needed to complete studies, while an additional 20% said it required extra time. Only 10% said that the rule strengthened public trust, and only 25% said it enhanced patient confidentiality.
Second, research institutions varied widely in their interpretation of privacy rule regulations. This impeded multicenter projects, and left many researchers confused about what research their IRB might or might not sanction. As many as one in nine epidemiologists (11%) had conceived of a study but did not submit it to an IRB because they thought it would not obtain approval under the HIPAA privacy rule, Dr. Ness and her associates said (JAMA 2007;298:2164–70).
Third, compliance with the privacy rule slowed research to such a degree that half of the respondents felt it is “seriously affecting” public health surveillance, which may threaten the ability to combat epidemics and other dangers.
Nearly 70% of respondents said that complying with the rule made their work much more difficult; an additional 16% said it made their work more difficult. In all, 40% said the rule greatly increased costs, and another 21% said it raised costs moderately. And half said it added considerably to the time needed to complete studies, while an additional 20% said it required extra time. Only 10% said that the rule strengthened public trust, and only 25% said it enhanced patient confidentiality.
Second, research institutions varied widely in their interpretation of privacy rule regulations. This impeded multicenter projects, and left many researchers confused about what research their IRB might or might not sanction. As many as one in nine epidemiologists (11%) had conceived of a study but did not submit it to an IRB because they thought it would not obtain approval under the HIPAA privacy rule, Dr. Ness and her associates said (JAMA 2007;298:2164–70).
Third, compliance with the privacy rule slowed research to such a degree that half of the respondents felt it was “seriously affecting” public health surveillance, which may threaten the ability to combat epidemics and other dangers.
Teriparatide Beats Alendronate in Prospective Trial
The anabolic agent teriparatide outperformed alendronate in patients with glucocorticoid-induced osteoporosis who were at high risk for fractures in a large, randomized, controlled trial.
Study participants taking teriparatide were significantly less likely to sustain new vertebral fractures and showed greater increases in bone mineral density (BMD) at the spine and hip, the investigators wrote in the New England Journal of Medicine.
International guidelines recommend bisphosphonates like alendronate for patients who either already have or are at risk for glucocorticoid-induced osteoporosis, they noted. But recombinant teriparatide—human parathyroid hormone (1–34)—is thought to stimulate bone formation, increase bone mass, and reduce the risk of vertebral and nonvertebral fractures.
“Teriparatide may be a rational treatment for glucocorticoid-induced osteoporosis because it directly stimulates osteoblastogenesis and inhibits osteoblast apoptosis, thereby counteracting two key mechanisms through which glucocorticoid therapy promotes bone loss,” reported Dr. Kenneth G. Saag of the University of Alabama at Birmingham, and his associates.
They are conducting what they called the first randomized, controlled clinical trial comparing teriparatide with a bisphosphonate in this patient population. Dr. Saag reported their results for the first 18 months of a planned 36 months. The trial is supported by Eli Lilly & Co., which markets teriparatide as Forteo in the United States.
Study participants in 12 countries in North America, South America, and Europe were randomly assigned to either injectable teriparatide plus an oral placebo or oral alendronate plus an injectable placebo every day. All also received daily calcium and vitamin D supplements.
The subjects were 345 women and 83 men aged 22–89 years who had established osteoporosis because of long-term glucocorticoid therapy for a variety of disorders.
After 18 months, BMD at the lumbar spine increased to a significantly greater degree in subjects taking teriparatide (7.2%) than in subjects taking alendronate (3.4%). The same was true for total hip BMD (3.8% and 2.4%, respectively).
Markers of bone formation increased almost 70% and those of bone resorption increased about 45% at 6 months in subjects taking teriparatide, while these markers decreased in subjects taking alendronate.
New vertebral fractures developed in 10 subjects taking alendronate, compared with only 1 taking teriparatide, a significant difference. The number of subjects who developed new nonvertebral fractures did not differ significantly between the two groups (N. Engl. J. Med. 2007;357:2028–39).
“Safety profiles in the two study groups were similar, with no significant differences in the overall incidence of adverse events, the incidence of serious adverse events, or the incidence of events either leading to withdrawal from the study or considered to be possibly related to the study drug,” they added.
However, 70 subjects in the alendronate group and 64 in the teriparatide group dropped out of the study. Thirteen (6.1%) of the 214 patients in the alendronate group and 25 (11.7%) of the 214 in the teriparatide group discontinued because of an adverse event.
In an editorial accompanying this report, Dr. Philip N. Sambrook, of the University of Sydney (Australia), noted this “moderately high” dropout rate of 30% may indicate that adherence to either treatment may be limited, particularly because this patient group is “often already unwell.
“The persistence of [adverse] effects in the ongoing 18-month extension of the study will be of interest,” he noted (N. Engl. J. Med. 2007;357:2084–6).
Nevertheless, “for patients with low bone mineral density who are receiving long-term low-dose glucocorticoid therapy, the study by Saag et al. suggests that teriparatide should be considered as a potential first-line therapy,” he said.
'Teriparatide may be a rational treatment for glucocorticoid-induced osteoporosis.' DR. SAAG