FDA Eases Access Barriers to Portable Blood Lead Testing

The Food and Drug Administration has expanded access to portable lead testing devices, which will allow for rapid screening of children and adults at more than 115,000 community clinics, mobile health units, schools, and work sites across the country.

Until now, the LeadCare II Blood Lead Test System (ESA Biosciences, Chelmsford, Mass.) had been available only at select hospitals and testing facilities with clearance to perform highly complex assays.

The FDA has recategorized the device so that it is waived under the Clinical Laboratory Improvement Amendments (CLIA), permitting it to be distributed to nontraditional sites to allow for more widespread testing.

Easy access to the portable device, which delivers results from either fingerstick or venous blood samples in 3 minutes, now “allows us to overcome the very real logistical challenges of testing children who may have been exposed to lead contamination,” Dr. John Agwunobi said at a press conference.

Access to lead testing has been particularly challenging for children in poor urban communities, where the risk for lead poisoning is highest.

For many of their families, it is a hardship to get to a hospital or physician's office for the initial blood testing.

The need to obtain confirmatory testing and medical follow-up poses further inconvenience and barriers to care, Dr. Agwunobi pointed out.

“Approximately 310,000 U.S. children aged 1–5 years have blood lead levels greater than 10 mcg of lead per deciliter of blood, a level at which harmful health effects are known to occur,” according to estimates from the Centers for Disease Control and Prevention.

About 24 million homes in the United States are believed to have significant lead-based paint hazards, according to estimates from the U.S. Department of Housing and Urban Development.

Low Level of Testosterone May Increase All-Cause Mortality Risk

Men with low testosterone levels seem to be at increased risk of death from all causes and to have shorter survival times than men with normal testosterone levels, said Dr. Molly M. Shores of the department of psychiatry and behavioral sciences at the University of Washington, Seattle, and associates.

In a recent small study, the researchers had found that men with a low testosterone level had higher 6-month mortality than did those with a normal level who were of similar age and had comparable medical morbidity. “Given these unforeseen preliminary findings, we conducted the present retrospective cohort study to examine if repeatedly low serum testosterone levels were associated with increased mortality in a larger sample of middle-aged and elderly men with a longer follow-up, of up to 8 years,” they said.

Dr. Shores and her associates identified in a clinical database 858 male veterans, aged 40 years and older, who had undergone at least two measures of testosterone levels between 1994 and 1999 and had then been followed for a mean of 4.3 years. They matched the data on these subjects with data in a national Veterans Affairs death registry to obtain mortality information.

The reasons why these men had undergone testosterone testing were not available for analysis, but previous research has shown that, in general, the most common clinical indications are evaluation of sexual dysfunction, osteoporosis, genitourinary conditions, and endocrine conditions, the investigators said (Arch. Intern. Med. 2006;166:1660–5).

A total of 452 men—53% of the study population—had normal serum testosterone levels (defined as 250 ng/dL or higher) or normal free testosterone levels (defined as 0.75 ng/dL or higher). Another 240 men (28%) had equivocal levels, and 166 (19%) had low levels.

Because testosterone levels decrease with acute and chronic illness, the prevalences of chronic obstructive pulmonary disease, HIV infection, coronary artery disease, and hyperlipidemia were noted. There were no significant differences between the men with normal testosterone levels and those with low testosterone levels regarding these disorders or overall medical morbidity.

All-cause mortality was 20% in men with normal testosterone levels and 25% in those with equivocal levels, compared with 35% in men with low levels. After the data were adjusted to account for the covariates of age, race, body mass index, and other clinical factors, “low testosterone level continued to be associated with an increased mortality risk of 88% greater than in men with normal testosterone levels,” the authors wrote.

To control for the confounding influence of possible acute illness, they conducted an analysis excluding all subjects who died within 1 year of having their testosterone levels measured. In this subset of subjects, low testosterone levels were still associated with a 68% greater mortality risk, compared with normal levels.

The findings do not show that low testosterone levels directly raise mortality risk, because “a retrospective cohort study cannot establish a causal relationship.” Large, prospective studies would clarify the issue, they wrote.

ECG's Role in Athletic Screening Protocol Debated

A national athletic screening program appears to have cut the rate of sudden death by 89% among adolescent and young adult athletes in Italy, according to Domenico Corrado, Ph.D., of the University of Padua, and his associates.

However, U.S. physicians cautioned that the less formal screening programs that are used in this country, and that do not include routine ECGs, may be as effective as the more involved Italian program. They warned against becoming “enamored” of elaborate screening approaches that may overestimate the benefits and minimize the risks and costs of screening.

In 1982, Italian law mandated that all competitive athletes aged 12–35 years undergo preparticipation screening for potentially lethal cardiovascular abnormalities. The CV screening includes a physical exam, family and personal history, and a 12-lead ECG.

Dr. Corrado and his associates analyzed the annual rates of sudden cardiovascular death from 1979 to 2004 in one region of the country with nearly 4,400,000 residents.

The investigators found that the rate decreased after the screening program was initiated, and that the decrease has persisted to the present. Of 42,386 screened athletes, 3,914 (9%) required additional cardiovascular testing and 879 (2%) of those were prohibited from participating in athletics.

The annual rate of sudden cardiac death in young athletes was 3.6 per 100,000 person-years in 1979 and 4.0 per 100,000 person-years in 1981. The rate then dropped precipitously to 1.5 per 100,000 person-years over the next 4 years after the screening program was introduced, and it has decreased more slowly since then to a low of 0.43 per 100,000 person-years in 2004, they reported (JAMA 2006;296:1593–1601).

The investigators attributed most of the reduced incidence to fewer deaths from cardiomyopathies.

This decline was accompanied by an increase—from 4.4% to 9.4%—in the proportion of young athletes who were identified by screening and disqualified from participating in competitive sports because of cardiomyopathies. No deaths occurred among these disqualified athletes, “suggesting that screening may prevent sudden death,” Dr. Corrado and his associates said.

In contrast, the trend for sudden CV death among unscreened, nonathletic people of the same age was relatively unchanged during the same time period, equivalent to a mortality of 0.79 per 100,000 person-years.

The findings “suggest that screening athletes for cardiomyopathies is a life-saving strategy and that 12-lead ECG is a sensitive and powerful [screening] tool,” they noted.

In an editorial comment that accompanied this report, Dr. Paul D. Thompson of Hartford (Conn.) Hospital and Dr. Benjamin D. Levine of the University of Texas, Dallas, wrote, “Although these results are provocative, they do not definitively prove the value of screening or establish the importance of routine ECGs in the screening process.

“This study was not a controlled comparison of the screening vs. nonscreening of athletes, but rather is a population-based observational study,” they noted (JAMA 2006;296:1648–50).

Moreover, the apparent decline in sudden cardiovascular death may reflect an unusually high initial death rate rather than a true decrease.

The lowest death rate reported in this study after the screening program was well established is equivalent to death rates among high school and college athletes in the United States in 1983–1993, the best data available for nontraumatic deaths in U.S. athletes.

This suggests that the less formal U.S. screening process may be as effective as the more involved Italian program, Dr. Thompson and Dr. Levine said.

The screening program appears to have cut the rate of sudden death by 89% in young athletes in Italy. DR. CORRADO

Sertraline Maintains Mood in Diabetes

Maintenance therapy with sertraline prevents a recurrence of major depression in diabetic patients whose mood disorder initially responds well to the drug, reported Patrick J. Lustman, Ph.D., of Washington University in St. Louis.

Clinical depression has been reported to occur in one-fourth of people with diabetes, and recurrent episodes are common. Depression not only impairs their function and quality of life but also increases their risk of death, largely by accelerating coronary heart disease, and their risk of diabetes complications, Dr. Lustman and his associates said (Arch. Gen. Psychiatry 2006;63:521–9).

Pharmacotherapy and psychotherapy improve both mood and glycemic control in depressed diabetic patients, but the benefits appear to be short-lived, with up to 60% of such patients developing a recurrence in the year following successful treatment. Maintenance therapy is known to reduce recurrences in 15%–30% of nondiabetic depressed patients but had not been assessed in diabetic patients until this study was done.

The researchers evaluated maintenance therapy in 152 patients with either type 1 or type 2 diabetes and major depressive disorder. The study subjects had a mean of five previous episodes of depression.

The current episode had resolved with sertraline therapy, at a mean dose of 118 mg per day (range of 50–200 mg per day). Subjects were then randomly assigned to either continue with the same dosage of sertraline that had induced recovery (79 subjects) or to switch to placebo (73 subjects), and were followed for 12 months or until depression recurred.

Depression symptoms and glycemic control were monitored in monthly office visits and via telephone interviews at every midpoint between office visits, to permit rapid detection of recurrences. Both the Beck Depression Inventory and the Hamilton Depression Rating scale were used to measure depression symptoms.

Sertraline was significantly more effective than placebo at prolonging the depression-free interval. At 1 year, the calculated rate of nonrecurrence was 66% in patients treated with sertraline, compared with 48% for those who received placebo, the investigators wrote.

The interval until one-third of the subjects developed a recurrence was 226 days in those taking sertraline, compared with 57 days in those taking placebo. The median time to recurrence exceeded 365 days, the maximum duration of follow-up, for subjects taking sertraline, compared with 251 days for those taking placebo.

Nearly 77% of recurrences developed early, within 4 months of randomization.

Sertraline did not interfere with glycemic control. In fact, glycemic control improved as depression improved with initial therapy, and it was maintained at that improved level throughout the depression-free interval.

“Treatment with sertraline is relatively simple, safe, and widely available, and although it is not curative, it offers patients with diabetes a potentially viable method for ameliorating the suffering, incapacity, and burden associated with recurrent depression,” Dr. Lustman and his associates said.

This study was supported in part by Pfizer Inc., which provided the sertraline for study subjects.

Improvements in glycemic control were maintained throughout the depression-free interval. DR. LUSTMAN

Article PDF
Author and Disclosure Information

Publications
Topics
Author and Disclosure Information

Author and Disclosure Information

Article PDF
Article PDF



Display Headline
Sertraline Maintains Mood in Diabetes

Depression Risk High in Women Under 60 With MI

Article Type
Changed
Display Headline
Depression Risk High in Women Under 60 With MI

Women younger than age 60 have “a remarkably higher rate of depression at the time of acute myocardial infarction, compared with the other demographic groups,” reported Dr. Susmita Mallik of Emory University, Atlanta, and her associates.

The researchers found that 40% of women aged 60 or younger had moderate or worse clinical depression in the multicenter PREMIER study, which enrolled nearly 2,500 MI patients.

“Even after adjusting for various demographic, behavioral, medical history, and clinical factors,” their odds of depression remained significantly elevated, at more than three times those of the reference group, men older than 60 years.

This high rate of depression may explain, at least in part, the higher rate of adverse outcomes after MI that has been noted in women compared with men. Depression predicts higher mortality after MI, with a clear dose-response relationship between the severity of depressive symptoms and mortality risk. Depression also is linked to longer hospitalization and worse symptomatic, psychological, and social outcomes.

To date, no study has evaluated the potential role that depression may play in this MI gender gap, Dr. Mallik and her associates noted (Arch. Intern. Med. 2006;166:876–83).

The PREMIER (Prospective Registry Evaluating Outcomes after Myocardial Infarction: Events and Recovery) study recruited 2,498 MI patients treated at 19 U.S. medical centers during 2003–2004. Participants were interviewed at admission for MI on sociodemographic, behavioral, and psychosocial factors as well as clinical factors.

Depressive symptoms at presentation were assessed using the nine-question Primary Care Evaluation of Mental Disorders Brief Patient Health Questionnaire. Possible scores range from 0 to 27, and a score of 10 or higher indicates major depression of a moderate or worse degree.
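The scoring rule above reduces to a simple cutoff check. The sketch below (function and label names are illustrative, not from the study) classifies a total questionnaire score using that rule:

```python
def classify_phq(total_score: int) -> str:
    # The nine-item questionnaire yields a total of 0-27; a score of
    # 10 or higher indicates major depression of moderate or worse
    # degree, per the cutoff described in the PREMIER report.
    if not 0 <= total_score <= 27:
        raise ValueError("total score must be between 0 and 27")
    return "moderate or worse depression" if total_score >= 10 else "below cutoff"

print(classify_phq(12))  # a score of 12 meets the >=10 cutoff
```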

Major depression was common, with an overall rate of 22%. Younger age, African American race, unmarried status, low levels of social support, and unfavorable socioeconomic indicators all positively correlated with depression.

Women under 60 years had the highest prevalence of depression, 40%, and the highest depression scores, indicating that they experienced more depressive symptoms as well as more severe symptoms than did men and older patients. After the data were adjusted for numerous potentially confounding factors, the odds of depression for women under 60 remained 3.1 times higher than those for the reference group of men over 60.
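For readers unfamiliar with how an odds ratio is formed, the unadjusted version is just a ratio of odds taken from a 2×2 table; the counts below are invented for illustration and are not the PREMIER data, and the published 3.1 figure is additionally adjusted for confounders, which requires a regression model rather than this arithmetic.

```python
def odds_ratio(exposed_cases, exposed_noncases, ref_cases, ref_noncases):
    # Odds of depression in each group = cases / non-cases;
    # the odds ratio compares the study group's odds with the
    # reference group's odds.
    return (exposed_cases / exposed_noncases) / (ref_cases / ref_noncases)

# Hypothetical counts chosen only to illustrate the calculation:
# 40 of 100 younger women depressed vs. 43 of 243 older men.
print(round(odds_ratio(40, 60, 43, 200), 2))  # ~3.1
```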

The cause remains unknown because the study was not designed to examine possible causes. Similarly, the researchers could not determine whether the depressive symptoms were secondary to MI or began shortly before its onset.

Nevertheless, “clinicians should be aware that younger women have a high susceptibility for being depressed after acute MI. Although screening for depression is warranted in all acute MI patients, screening should be particularly aggressive in younger female patients with acute MI,” the researchers stressed.

“Depression was largely unrecognized and untreated in our study,” Dr. Mallik and her associates noted.

Of the depressed patients, 73% had no history of depression when they presented with MI, which suggests either that the disorder was unrecognized or that it developed for the first time in association with the MI.

Notably, only 18% of the depressed patients were discharged with prescriptions for antidepressants. Clinicians should be reminded that “depression in patients with MI remains a serious condition and deserves treatment,” the researchers said.

“In addition to being an important illness in its own right, depression during hospitalization with acute MI confers 3–5 times higher adjusted odds of death” within 6 months, they added.





Alpha-Synuclein Gene Variation Associated With Parkinson's

Article Type
Changed
Display Headline
Alpha-Synuclein Gene Variation Associated With Parkinson's

Variations in the length of the alpha-synuclein gene promoter's dinucleotide repeat sequence have been linked to Parkinson's disease in a study analyzing DNA samples from more than 5,300 subjects around the world.

Mutations in the alpha-synuclein (SNCA) gene have previously been implicated in Parkinson's disease (PD), but only within certain families and rarely in the general population.

“Our study demonstrates that the SNCA gene is not only a rare cause of autosomal dominant Parkinson disease in some families, but also a susceptibility gene for PD at the population level,” reported Dr. Demetrius M. Maraganore of the Mayo Clinic, Rochester, Minn., and his associates.

“Based on our results, we estimate that REP1 [the alpha-synuclein gene promoter's dinucleotide repeat sequence] locus variability may explain approximately 3% of the risk in the general population,” the researchers said (JAMA 2006;296:661–70).

They used data from the Genetic Epidemiology of Parkinson's Disease Consortium to investigate SNCA gene mutations in what they described as “the largest case-control study of PD to date.” The consortium collects and shares biospecimens and data collected at multiple sites worldwide.

For this study, DNA analysis was done on samples from 2,692 patients with PD and 2,652 unrelated control subjects.

Variability in the length of a dinucleotide repeat sequence within the SNCA promoter was found to be associated with PD susceptibility.

If further study finds that this risk is conferred via a mechanism of gene overexpression, interventions targeting SNCA expression may reduce the risk of developing PD in susceptible populations. It remains uncertain, however, whether therapies to reduce SNCA expression would affect the progression of existing PD, Dr. Maraganore and his associates noted.





Autism Disorders More Prevalent Than Thought in England

Article Type
Changed
Display Headline
Autism Disorders More Prevalent Than Thought in England

The prevalence of autism and related disorders was found to be “substantially greater” than expected in a screening of nearly 57,000 children in England.

The prevalence of autism spectrum disorders was 116 per 10,000 population. Previous estimates from research published in the past 6 years pegged the prevalence at only 30–90 cases per 10,000.

Before that, prevalence was widely accepted to be only four to five cases per 10,000, according to Dr. Gillian Baird, who is affiliated with Guy's and St. Thomas' NHS Foundation Trust, London, and her associates (Lancet 2006;368:210–15).

These findings indicate that autism spectrum disorders are not the rare anomalies that the public has always considered them to be but instead affect about 1% of children aged 9–10 years, they added.

In an editorial comment accompanying their report, Dr. Hiroshi Kurita of Zenkoku Ryoiku Sodan Centre, Tokyo, speculated that the recent surge in prevalence is more likely attributable to improved case ascertainment rather than to a true increase in the disorders (Lancet 2006;368:179–81).

Dr. Baird and her associates screened a population cohort of 56,946 children born 1990–1991 in 12 districts in South Thames, England.

The subjects were aged 9–10 years at assessment, “an age when it is likely that all true cases of autism spectrum disorders, or at least those in whom the condition was causing significant functional impairment, would have come to the attention of health and education services.”

They used data from the special needs register of the department of child health services to identify those who might have an autism spectrum disorder.

The registry also listed all children who attended special schools or mainstream schools and whose files contained a statement of educational needs indicating they had language, learning, behavior, or medical problems requiring intervention.

The researchers also collaborated with local clinicians to search registers of children known to various therapy services for having social or communicative impairment or autism spectrum disorders.

In all, they identified 255 children with a current diagnosis of autism spectrum disorders and another 1,515 considered to be potential candidates for the diagnosis.

The investigators screened these subjects using a parent-report questionnaire on characteristic autistic behavior. They then conducted detailed clinical assessments of a random sample of 255 subjects (223 boys, 32 girls). This included in-person observation and scoring on two diagnostic tools, the autism diagnostic interview-revised (ADI-R) and the autism diagnostic observation schedule-generic (ADOS-G); it also included review of teacher reports, psychometric testing results, and other extensive case material.

Based on these results, Dr. Baird and her associates estimated the prevalence of autism spectrum disorders to be 116 per 10,000 population.

When the data were broken down into subtypes, the prevalence of narrowly defined autism was 39 per 10,000, and the prevalence of other pervasive developmental disorders was 77 per 10,000.
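The subtype figures sum to the overall estimate, and converting the rate per 10,000 to a percentage shows where the "about 1%" figure comes from:

```python
narrow_autism = 39  # cases per 10,000: narrowly defined autism
other_pdd = 77      # cases per 10,000: other pervasive developmental disorders
overall = narrow_autism + other_pdd

print(overall)                    # 116 per 10,000, the reported overall prevalence
print(f"{overall / 10_000:.2%}")  # 1.16%, i.e. roughly 1% of children
```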

The National Autistic Society, the leading charity for people with autism spectrum disorders in the United Kingdom, says there are about 520,000 people with autism spectrum disorders in the U.K.

Mounting evidence, including twin studies, suggests that genetic factors play a prominent role in the causes of autism spectrum disorders. The disorders can be diagnosed in young toddlers and even in infants, but such screening advances have not yet filtered into clinical practice.





The investigators screened these subjects using a parent-report questionnaire on characteristic autistic behavior. They then conducted detailed clinical assessments of a random sample of 255 subjects (223 boys, 32 girls). This included in-person observation and scoring on two diagnostic tools, the autism diagnostic interview-revised (ADI-R) and the autism diagnostic observation schedule-generic (ADOS-G); it also included review of teacher reports, psychometric testing results, and other extensive case material.

Based on these results, Dr. Baird and her associates estimated the prevalence of autism spectrum disorders to be 116 per 10,000 population.
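The arithmetic behind such a two-phase estimate can be sketched as follows. The confirmation count in the subsample is hypothetical, and the published figure also used sampling weights not reproduced in this naive version:

```python
def per_10k(cases: float, population: int) -> float:
    """Convert a case count to a rate per 10,000 population."""
    return cases / population * 10_000

def two_phase_estimate(confirmed_in_sample: int, sample_size: int,
                       candidates: int, cohort: int) -> float:
    """Naively scale diagnoses in an assessed subsample back to the cohort."""
    implied_cases = confirmed_in_sample * candidates / sample_size
    return per_10k(implied_cases, cohort)

COHORT = 56_946            # children in the screened population
CANDIDATES = 255 + 1_515   # flagged as diagnosed or possible ASD
ASSESSED = 255             # random subsample given full clinical assessment

# A rate of 116 per 10,000 corresponds to roughly 660 cases in this cohort.
print(round(per_10k(660, COHORT)))  # 116

# Hypothetical: if 95 of the 255 assessed children met ASD criteria.
print(round(two_phase_estimate(95, ASSESSED, CANDIDATES, COHORT), 1))
```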

When the data were broken down into subtypes, the prevalence of narrowly defined autism was 39 per 10,000, and the prevalence of other pervasive developmental disorders was 77 per 10,000.

The National Autistic Society, the leading charity for people with autism spectrum disorders in the United Kingdom, says there are about 520,000 people with autism spectrum disorders in the U.K.

Mounting evidence, including twin studies suggesting a genetic vulnerability, indicates that genetic factors play a prominent role in the causes of autism spectrum disorders. The disorders can be diagnosed in young toddlers and even in infants, but such screening advances have not yet filtered into clinical practice.

ELSEVIER GLOBAL MEDICAL NEWS

Display Headline
Autism Disorders More Prevalent Than Thought in England

Colchicine Delays HCC in Hepatitis-Related Cirrhosis

Article Type
Changed
Display Headline
Colchicine Delays HCC in Hepatitis-Related Cirrhosis

Colchicine therapy prevents or delays the development of hepatocellular carcinoma in patients who have cirrhosis related to viral hepatitis, reported Dr. Oscar Arrieta of the National Cancer Institute, Mexico City, and his associates.

Until now, no treatment had been found effective in preventing hepatocellular carcinoma (HCC) from developing in patients with cirrhosis of any etiology.

The alkaloid and antimitotic agent colchicine has shown mixed effects on the progression of fibrosis, ascites, esophageal varices, portal vein pressure, functional status, and mortality in cirrhosis patients, but no studies had assessed its effect against HCC, the investigators said.

They evaluated colchicine in a retrospective cohort study involving 186 patients with hepatitis virus-related cirrhosis who were treated between 1980 and 2000 and who were followed every 3–6 months for a minimum of 3 years. A total of 116 of these subjects (62%) received 1 mg colchicine 5 days per week for a mean of 63 months (range 6–168 months). Almost all of these subjects (96%) were treated for at least 1 year. None discontinued the drug because of adverse effects.

Of the subjects who took colchicine, 9% developed HCC compared with 29% of the subjects who did not take the drug, a significant difference, Dr. Arrieta and his associates said (Cancer 2006 Sept. 11 [Epub doi:10.1002/cncr.22198]).

Moreover, among subjects who did develop HCC, the cancer-free interval was significantly longer in those treated with colchicine (222 months) than in those who did not take the drug (150 months).
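As a back-of-the-envelope reading of those proportions (derived here for illustration, not reported by the authors):

```python
# Effect-size arithmetic from the reported incidences: 9% of treated vs.
# 29% of untreated subjects developed HCC over follow-up.
p_treated, p_untreated = 0.09, 0.29

rr = p_treated / p_untreated        # relative risk of HCC with colchicine
arr = p_untreated - p_treated       # absolute risk reduction
nnt = 1 / arr                       # patients treated per HCC case averted

print(round(rr, 2), round(arr, 2), round(nnt))  # 0.31 0.2 5
```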

The exact oncogenic mechanism of viral-related HCC is not known, but virus-induced inflammation is thought to lead to hepatocyte destruction and liver fibrosis. Colchicine may decrease inflammation and may also have antimitotic properties that reduce cellular proliferation, “thereby interrupting the hyperplasia-dysplasia-metaplasia sequence of HCC and preventing mutations leading to HCC,” the researchers said.

In this study, as in previous studies, colchicine showed no direct beneficial effect on the progression of cirrhosis. With colchicine, 9% of patients showed improvement on their Child-Turcotte-Pugh score during follow-up, 35% showed no change, and 56% showed disease progression. The corresponding numbers in the subjects who didn't take colchicine were 2.5%, 37%, and 60%—progression rates that were not significantly different.

The study findings indicate that colchicine prevents or delays the development of HCC independent of factors such as age, platelet count, alpha fetoprotein levels, and transaminase levels.


Predicting Lynch Syndrome Propensity to Cancer: Two new models help sort out which patients need extensive genetic testing for the hereditary mutation.

Article Type
Changed
Display Headline
Predicting Lynch Syndrome Propensity to Cancer: Two new models help sort out which patients need extensive genetic testing for the hereditary mutation.

Two new prediction models help identify which patients suspected of having Lynch syndrome should undergo extensive genetic testing for the mutations associated with colorectal cancer.

Lynch syndrome, also known as hereditary nonpolyposis colorectal cancer, is characterized by the predisposition to develop early-onset colorectal cancer as well as cancers of the endometrium, gastrointestinal tract, ovary, hepatobiliary system, urinary tract, brain, and other sites. It is almost always associated with underlying mutations in the mismatch DNA repair system, most often in the MLH1 and MSH2 genes.

Currently, screening for Lynch syndrome is fraught with challenges. Clinical criteria for identifying which patients are likely to have the syndrome are restrictive and don't take into account several variants of the disease. They also rely on detailed family histories or on tumor samples, which often are not available. Further genetic testing is not very sensitive or specific and is expensive.

Two research groups have developed different models to predict the likelihood that patients have Lynch syndrome and should undergo genetic testing, much like the models that are widely used by health care professionals to predict mutations in the BRCA1 and BRCA2 genes in assessing patients' breast cancer risk.

The PREMM1,2 (Prediction of Mutations in MLH1 and MSH2) model was developed using a cohort of 898 probands and 1,618 first- or second-degree relatives who submitted blood samples for sequencing of the two genes, then validated in another cohort of 1,016 probands. This genetic testing had been ordered by the probands' health care providers—chiefly geneticists, oncologists, gastroenterologists, and gynecologists—who suspected Lynch syndrome because the patients' personal or family histories were suggestive, according to Dr. Judith Balmana of Dana-Farber Cancer Institute, Boston, and her associates.

These large, diverse, national cohorts allowed the investigators to incorporate great detail into their prediction model, including the age at diagnosis of probands and their relatives, the presence of colonic adenomas, and the different degrees of risk for different cancers.

The PREMM1,2 model thus is more sensitive and specific than clinical criteria in determining which patients should undergo extensive genetic testing. It also helps decide which approach to genetic testing will be most useful (JAMA 2006;296:1469–78). The PREMM1,2 model is available through the Dana-Farber Web site (www.dfci.org/premm).
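Models like PREMM1,2 are logistic regressions: clinical-history variables feed a linear predictor that is mapped to a mutation probability. A minimal sketch with invented variable names and coefficients (the published coefficients are in the JAMA report):

```python
import math

def mutation_probability(intercept: float, coefficients: dict[str, float],
                         features: dict[str, float]) -> float:
    """Logistic model: probability of carrying a mutation given history."""
    z = intercept + sum(c * features.get(name, 0.0)
                        for name, c in coefficients.items())
    return 1 / (1 + math.exp(-z))

# Hypothetical coefficients, for illustration only.
coef = {"proband_crc_before_50": 1.2,
        "relatives_with_crc": 0.8,
        "endometrial_ca_in_family": 0.9}

p = mutation_probability(-3.0, coef,
                         {"proband_crc_before_50": 1, "relatives_with_crc": 2})
print(round(p, 2))  # 0.45
```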

The MMRpro (Mutations of Mismatch Repair) model also is more sensitive and specific than existing clinical guidelines for identifying patients who may benefit from genetic testing, reported Sining Chen, Ph.D., of Johns Hopkins Bloomberg School of Public Health, Baltimore, and associates.

In particular, this statistical model estimates the likelihood that a patient carries deleterious mutations of the MLH1, MSH2, or MSH6 genes in cases in which tumor tissue is not available for analysis or commercial germline testing techniques have been insufficiently sensitive to detect a mutation.

The MMRpro model was developed using a meta-analysis of studies that provided risk estimates for colorectal and endometrial cancers. It was then validated in a cohort of 279 patients who had undergone germline testing and who were from 226 families believed to be affected by Lynch syndrome.

The MMRpro model incorporates both a mutation-prediction algorithm and a cancer-risk prediction algorithm. The latter allows clinicians to estimate the likelihood that cancer will develop in patients who have strong evidence of Lynch syndrome but in whom no mutation has been found. “This feature is also valuable for [patients] who do not wish to be genotyped but would still like to consider preventative measures,” Dr. Chen and associates said (JAMA 2006;296:1479–87).
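Mendelian mutation-prediction models of this kind rest on Bayes' rule: a population carrier prior is updated by how much more probable the observed family history is under carrier status. A minimal sketch with an invented prior and likelihoods (the real MMRpro integrates full pedigree and genotype data):

```python
def posterior_carrier(prior: float, lik_carrier: float,
                      lik_noncarrier: float) -> float:
    """P(carrier | family history) via Bayes' rule."""
    numerator = prior * lik_carrier
    return numerator / (numerator + (1 - prior) * lik_noncarrier)

# Hypothetical: a 1-in-500 population prior, and a family history that is
# 30 times more likely if the patient carries a mismatch-repair mutation.
p = posterior_carrier(prior=0.002, lik_carrier=0.30, lik_noncarrier=0.01)
print(round(p, 3))  # 0.057
```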

“Software for performing MMRpro calculations is open source and available free of charge via either the mendelian risk prediction package BayesMendel at www.astor.som.jhmi.edu/BayesMendel or www.utsouthwestern.edu/breasthealth/cagene.”

In an editorial comment accompanying these reports, Dr. James M. Ford and Dr. Alice S. Whittemore of Stanford (Calif.) University's clinical cancer genetics program said that both prediction models should prove to be “very useful tools for clinicians and their patients, as well as for epidemiologists.”

These models should improve clinicians' ability to identify patients at risk for Lynch syndrome “and hopefully to prevent cancer from occurring using intensive surveillance techniques and prevention schemes,” Dr. Ford and Dr. Whittemore said (JAMA 2006;296:1521–3).


MRSA Is Most Common Cause of Skin Infections in Many EDs

Article Type
Changed
Display Headline
MRSA Is Most Common Cause of Skin Infections in Many EDs

Methicillin-resistant Staphylococcus aureus is now the most common identifiable cause of skin and soft-tissue infection seen in patients presenting to emergency departments in many U.S. cities.

Clinicians now should reconsider standard empirical antibiotic therapy in regions where methicillin-resistant Staphylococcus aureus (MRSA) is prevalent, and perhaps switch to drugs that provide MRSA coverage. And health care workers should take precautions such as using gowns and gloves when treating any patient with purulent skin or soft-tissue infection, according to Dr. Gregory J. Moran of the departments of emergency medicine and infectious diseases at Olive View-UCLA Medical Center, Sylmar, Calif., and his associates.

Since data concerning the prevalence of MRSA skin and soft-tissue infections have been scarce, Dr. Moran and his associates investigated the issue among 422 adults presenting to university-affiliated emergency departments in August 2004, in 11 cities in geographically diverse regions throughout the country. The median patient age was 39 years (range 18–79 years), and 62% of the subjects were men. Approximately half of the group was black, one-fourth was white, 22% were Hispanic, and the rest belonged to other racial groups.

The infections involved the upper extremities (29%), lower extremities (27%), torso (17%), perineum (14%), or head and neck (13%). They were classified as abscesses in 81% of patients, infected wounds in 11%, and cellulitis with purulent exudates in 8%. S. aureus was isolated in 320 patients, and 249 (78%) of these were MRSA isolates. “MRSA was the most common identifiable cause of skin and soft-tissue infections in 10 of the 11 emergency departments,” and the prevalence ranged from 15% to 74%, the researchers said (N. Engl. J. Med. 2006;355:666–74).

MRSA was isolated from 61% of abscesses, 53% of purulent wounds, and 47% of cellulitis cases, and 99% of the strains were community acquired rather than health care related. This is consistent with reports of dramatic rises in community-associated MRSA (CA-MRSA) in the past few years, the investigators said.

“Although more than 80% of patients with skin and soft-tissue infections associated with MRSA in this study received empirical antimicrobial therapy for their infections, the infecting isolate was resistant to the agent prescribed for 57% of these patients. This finding suggests a need to reconsider empirical antimicrobial choices for skin and soft-tissue infections in areas where MRSA is prevalent in the community,” they noted.

Of the MRSA isolates tested for drug susceptibility, 100% were susceptible to trimethoprim-sulfamethoxazole and rifampin, 95% were susceptible to clindamycin, 92% to tetracycline, 60% to fluoroquinolones, and 6% to erythromycin.
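Those figures can be read as a miniature antibiogram. A sketch of screening empiric options against a coverage threshold (the 90% cutoff here is an arbitrary choice for illustration):

```python
# Reported susceptibility of the MRSA isolates in this study.
susceptibility = {
    "trimethoprim-sulfamethoxazole": 1.00,
    "rifampin": 1.00,
    "clindamycin": 0.95,
    "tetracycline": 0.92,
    "fluoroquinolones": 0.60,
    "erythromycin": 0.06,
}

def adequate_empiric(antibiogram: dict[str, float],
                     threshold: float = 0.90) -> list[str]:
    """Agents whose local susceptibility meets the coverage threshold."""
    return sorted(agent for agent, s in antibiogram.items() if s >= threshold)

print(adequate_empiric(susceptibility))
```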

Even though most patients with MRSA abscesses were treated with β-lactam agents such as cephalexin and dicloxacillin, which are ineffective against MRSA, their outcomes did not differ significantly from those of patients whose isolates were susceptible to the drug they received.

The patients who received inappropriate antibiotics probably were cured by the drainage of the abscess and other wound care they received along with the drugs, suggesting that “most simple skin abscesses, even when caused by MRSA, can be cured with adequate drainage alone,” Dr. Moran and his associates said.

“The susceptibility of a given pathogen to prescribed antimicrobial agents may be more likely to affect the outcome among patients with cellulitis or purulent wounds. Unfortunately, there were insufficient numbers of these patients with follow-up information in our study to assess this relationship,” they added.

Patients with MRSA infection were more likely than were those with other bacterial infections to report that they believed their lesions resulted from spider bites, perhaps because these MRSA strains cause unusually painful lesions.

Article PDF
Author and Disclosure Information

Publications
Topics
Author and Disclosure Information

Author and Disclosure Information

Article PDF
Article PDF

Methicillin-resistant Staphylococcus aureus is now the most common identifiable cause of skin and soft-tissue infection seen in patients presenting to emergency departments in many U.S. cities.

Clinicians now should reconsider standard empirical antibiotic therapy in regions where methicillin-resistant Staphylococcus aureus (MRSA) is prevalent, and perhaps switch to drugs that provide MRSA coverage. And health care workers should take precautions such as using gowns and gloves when treating any patient with purulent skin or soft-tissue infection, according to Dr. Gregory J. Moran of the departments of emergency medicine and infectious diseases at Olive View-UCLA Medical Center, Sylmar, Calif., and his associates.

Because data on the prevalence of MRSA skin and soft-tissue infections have been scarce, Dr. Moran and his associates investigated the issue among 422 adults presenting to university-affiliated emergency departments in 11 geographically diverse U.S. cities during August 2004. The median patient age was 39 years (range 18–79 years), and 62% of the subjects were men. Approximately half of the group was black, one-fourth was white, 22% were Hispanic, and the rest belonged to other racial groups.

The infections involved the upper extremities (29%), lower extremities (27%), torso (17%), perineum (14%), or head and neck (13%). They were classified as abscesses in 81% of patients, infected wounds in 11%, and cellulitis with purulent exudates in 8%. S. aureus was isolated in 320 patients, and 249 (78%) of these were MRSA isolates. “MRSA was the most common identifiable cause of skin and soft-tissue infections in 10 of the 11 emergency departments,” and the prevalence ranged from 15% to 74%, the researchers said (N. Engl. J. Med. 2006;355:666–74).

MRSA was isolated from 61% of abscesses, 53% of purulent wounds, and 47% of cellulitis cases, and 99% of the strains were community acquired rather than health care related. This is consistent with reports of dramatic rises in community-associated MRSA (CA-MRSA) in the past few years, the investigators said.

“Although more than 80% of patients with skin and soft-tissue infections associated with MRSA in this study received empirical antimicrobial therapy for their infections, the infecting isolate was resistant to the agent prescribed for 57% of these patients. This finding suggests a need to reconsider empirical antimicrobial choices for skin and soft-tissue infections in areas where MRSA is prevalent in the community,” they noted.

Of the MRSA isolates tested for drug susceptibility, 100% were susceptible to trimethoprim-sulfamethoxazole and rifampin, 95% were susceptible to clindamycin, 92% to tetracycline, 60% to fluoroquinolones, and 6% to erythromycin.

Even though most patients with MRSA abscesses were treated with β-lactam agents such as cephalexin and dicloxacillin, which are ineffective against MRSA isolates, outcomes in these patients did not differ significantly from outcomes in patients whose isolates were susceptible to the prescribed drug.

The patients who received inappropriate antibiotics probably were cured by the drainage of the abscess and other wound care they received along with the drugs, suggesting that “most simple skin abscesses, even when caused by MRSA, can be cured with adequate drainage alone,” Dr. Moran and his associates said.

“The susceptibility of a given pathogen to prescribed antimicrobial agents may be more likely to affect the outcome among patients with cellulitis or purulent wounds. Unfortunately, there were insufficient numbers of these patients with follow-up information in our study to assess this relationship,” they added.

Patients with MRSA infection were more likely than were those with other bacterial infections to report that they believed their lesions resulted from spider bites, perhaps because these MRSA strains cause unusually painful lesions.

Display Headline
MRSA Is Most Common Cause of Skin Infections in Many EDs