Follow mild, inactive lupus patients every 3-4 months

Patients with mild or inactive systemic lupus erythematosus should be seen every 3-4 months, based on evidence of how often they develop disease features they would not recognize on their own, a study has shown.

About a quarter of the patients in the study, who were seen at a single center for at least 18 months of follow-up, had one feature from a group of 10 variables associated with systemic lupus erythematosus (SLE) that "triggered either further investigation or a change in therapy, or suggested more frequent follow-up," the investigators found.

Dr. Dafna D. Gladman and her associates at the University of Toronto lupus clinic at Toronto Western Hospital aimed to establish the "optimal frequency of follow-up visits" in SLE patients with low disease activity, given that the American College of Rheumatology (ACR) and the European League Against Rheumatism (EULAR) have different recommendations.

The authors noted that ACR recommendations – follow-up every 3-6 months for those with very mild stable disease – rely on "the nature of the protean clinical and laboratory features of SLE and the variety of treatments required to control these features." Meanwhile, EULAR recommends asymptomatic patients be clinically assessed every 6-12 months based on expert opinion on quality indicators, including disease activity, damage accumulation, quality of life, drug toxicity, and comorbidities.

With an Oxford Centre for Evidence-Based Medicine category 2b level of evidence and a grade B recommendation, Dr. Gladman and her colleagues suggested that "ACR and EULAR recommendations be amended to reflect" the evidence-based finding that 3- to 4-month follow-up intervals are most appropriate for patients with mild or inactive disease.

The researchers tracked 515 SLE patients (89% female; 61% white; mean age, 42.2 years) from Jan. 1, 2009, to Dec. 31, 2010, if they had at least three visits and at least 18 months of follow-up. The patients had a mean disease duration of 14.2 years and a mean SLE Disease Activity Index 2000 (SLEDAI-2K) score of 4.1 at study baseline (J. Rheumatol. 2013 March 1 [doi:10.3899/jrheum.121094]).

Outcomes of interest were the following "solitary silent new features" of disease activity, recorded as such in the study if they were new to the patient (a rule-based sketch of these cutoffs follows the list):

• Proteinuria (greater than 500 mg per 24 hours).

• Hematuria (greater than five red blood cells per high power field).

• Pyuria (greater than five white blood cells per high power field).

• Both hematuria and pyuria in the absence of infection, menses, or stones.

• Heme-granular or red blood cell casts.

• Low hemoglobin level (less than 100 g/L).

• Leukopenia (less than 3,000/mm3).

• Thrombocytopenia (less than 100,000/mm3).

• Elevated serum creatinine (greater than 120 µmol/L).

• Positive anti-DNA antibodies (greater than 7 U by Farr).

• Low complement (less than 0.10 g/L for C4 and less than 0.9 g/L for C3).
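
To make the cutoffs above concrete, here is a minimal, hypothetical Python sketch of the threshold checks; the field names, units, and data layout are illustrative assumptions rather than the study's actual data dictionary, and the study's requirements that a feature be both new to the patient and solitary are omitted.

```python
# Hypothetical rule-based screen for the lab cutoffs listed above.
# Field names and units are illustrative assumptions; the study's
# "new to the patient" and "solitary" conditions are not modeled here.

def flag_silent_features(labs: dict) -> list[str]:
    """Return the threshold-based features present in `labs`."""
    flags = []
    if labs.get("proteinuria_mg_per_24h", 0) > 500:
        flags.append("proteinuria")
    if labs.get("rbc_per_hpf", 0) > 5:
        flags.append("hematuria")
    if labs.get("wbc_per_hpf", 0) > 5:
        flags.append("pyuria")
    if labs.get("casts_present", False):
        flags.append("heme-granular or RBC casts")
    if labs.get("hemoglobin_g_per_L", 999) < 100:
        flags.append("low hemoglobin")
    if labs.get("wbc_per_mm3", 10**6) < 3_000:
        flags.append("leukopenia")
    if labs.get("platelets_per_mm3", 10**6) < 100_000:
        flags.append("thrombocytopenia")
    if labs.get("creatinine_umol_per_L", 0) > 120:
        flags.append("elevated serum creatinine")
    if labs.get("anti_dna_farr_U", 0) > 7:
        flags.append("positive anti-DNA antibodies")
    if labs.get("c4_g_per_L", 999) < 0.10 or labs.get("c3_g_per_L", 999) < 0.9:
        flags.append("low complement")
    return flags

# A patient whose only abnormality is a low C3 gets a single flag.
print(flag_silent_features({"c3_g_per_L": 0.7, "hemoglobin_g_per_L": 120}))
# -> ['low complement']
```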

The 515 patients in the study made 3,126 visits during the 2-year period. Overall, 126 (25%) had at least one solitary silent new feature, found at 175 (5.6%) of the clinic visits. During the study, patients averaged 6.1 visits, with a mean follow-up of 1.8 years and an average of 3.8 months between clinic visits.

The most frequent features were low complement (45 patients), pyuria (35 patients), positive anti-DNA antibodies (32 patients), casts (16 patients), and proteinuria (15 patients). These and the less frequently occurring features – low hemoglobin, elevated serum creatinine, leukopenia, thrombocytopenia, and hematuria – led to a variety of treatment and management changes.

"In the majority of cases, concern was expressed and further laboratory tests were undertaken," the authors wrote. "In 18 patients, steroids, antimalarials, and/or immunosuppressives were added or doses increased within the 12 months following the identification of a silent solitary new feature." Patients with anemia, leukopenia, or thrombocytopenia received second lab tests, which sometimes led to discontinuation of their cytotoxic drugs.

At the start of the study, the mean SLEDAI-2K score was 4.8 for patients who had no solitary silent new features during the study, compared with 2.1 for those who did (P less than .0001). The Systemic Lupus International Collaborating Clinics Damage Index score was 1.41 for the 389 patients without the features, compared with 1.83 in those who had them (P = .05).

The study was funded by the Lupus Flare Foundation, the Toronto General and Toronto Western Hospital Foundation, and the Arthritis and Autoimmune Research Centre Foundation. Disclosures were not noted in the study.

FROM THE JOURNAL OF RHEUMATOLOGY

Vitals

Major Finding: A total of 25% of SLE patients with mild or inactive disease had at least 1 of 10 disease-related "silent" features during a 2-year period.

Data Source: A prospective observational cohort study of 515 SLE patients with mild or inactive disease who had at least three visits and at least 18 months of follow-up during a 2-year period at one center.

Disclosures: The study was funded by the Lupus Flare Foundation, the Toronto General and Toronto Western Hospital Foundation, and the Arthritis and Autoimmune Research Centre Foundation. Disclosures were not noted in the study.

New Alzheimer's drug yields modest memory improvements

SAN DIEGO – A new drug under investigation for Alzheimer’s disease showed modest improvements in patients’ episodic memory in a 12-week, phase IIa trial.*

The patients taking the drug, called ORM-12741, saw a 4% increase in their episodic memory performance, while the placebo patients’ episodic memory performance declined 33%, Dr. Juha Rouru of Orion Pharma, Turku, Finland, and his associates reported at the annual meeting of the American Academy of Neurology.

The proof-of-concept study was a randomized, double-blind, placebo-controlled, multicenter trial that compared two dosage levels of ORM-12741 against placebo in 100 patients. The patients all had moderate Alzheimer’s disease, with a score between 12 and 21 on the Mini Mental State Examination (MMSE). The patients also had behavioral symptoms, with a Neuropsychiatric Inventory (NPI) score of 15 or greater.

The patients received either 30-60 mg or 100-300 mg of ORM-12741, or a matching placebo, twice a day for 12 weeks. The patients were already taking a cholinesterase inhibitor and were allowed to take memantine (Namenda) as well. Dr. Rouru said the ORM-12741 dosage flexibility was built into the study because previous human subjects receiving the drug did not have Alzheimer’s, so the flexibility allowed for safety adjustments if necessary.

The battery of tests to assess cognitive function in the study participants included the Quality of Episodic Memory (QEM), Quality of Working Memory (QWM), Quality of Memory (QM), Speed of Memory, and Power of Attention. The NPI was also used to assess other potential behavioral and psychological symptoms during the trial.

At follow-up, the patients receiving ORM-12741 scored a mean 4% higher on the QEM composite score, which combines both episodic and working memory. No significant difference was noted between dosage amounts. Patients in the placebo group scored a mean 33% lower on the QEM composite. In addition, the researchers reported a positive trend for both the QWM score and NPI total score in the low-dose group. No other significant differences were noted from the other assessments.

ORM-12741 differs from other Alzheimer’s drugs on the market, such as memantine or cholinesterase inhibitors, by acting on a completely different target, Dr. Rouru said. The drug targets a specific subtype of adrenergic receptors in the brain called alpha-2C.

"Quite little has been known about those receptors, and a big part of the work that has been done on them has been done in our company," Dr. Rouru said in an interview. "In animal models, we see very clearly they are involved in memory, but they are also fine tuners for many behavioral things. We have been able to demonstrate in animal models not only that drugs that target to these receptors improve memory, but that they have also beneficial effect on depressive and psychotic symptoms."

The different target offers clinical advantages in terms of treatment combinations, Dr. Rouru said. "The bottom line is that because the target is different, we can very easily use this drug in combination with other medications so that we are adding effect," he said.

Dr. Rouru said the drug was well tolerated and that adverse events were similar in the intervention and placebo groups. It is too early to say which adverse events in the intervention group may be attributable to ORM-12741, he said.

The most commonly reported adverse events were headache in 12% of placebo-treated patients and 5% of patients taking ORM-12741; urinary tract infection in 9% taking placebo and 9% taking ORM-12741; nausea in 9% receiving placebo and 5% receiving ORM-12741; vomiting in 3% taking placebo and 8% taking ORM-12741; diarrhea in 6% on placebo and 5% on ORM-12741; and irritability in 9% on placebo and 3% on ORM-12741.

Dr. Rouru said he is very pleased with the results of this trial, and he and his associates at Orion Pharma are now in the planning stages of the next clinical trial.

"It was very nice that the animal effect we saw, we saw in humans as well," Dr. Rouru said. "Also, the effect that we saw in humans was very clear, which was really encouraging for the future of this compound."

The study was funded by Orion Pharma. Two authors are Orion employees, and two authors work for World Wide Clinical Trials, the contract research organization involved in the trial. One author works for the company providing the computerized memory measurement system.

* This article was revised on 3/27/13.

AT THE 2013 AAN ANNUAL MEETING

Vitals

Major Finding: Patients with moderate Alzheimer’s disease receiving either 30-60 mg or 100-300 mg of trial drug ORM-12741 twice daily saw a 4% improvement of their episodic memory within 12 weeks, compared with a 33% decrease of episodic memory performance in patients receiving a placebo (P = .03).

Data Source: The findings are based on a phase IIa, randomized, double-blind, placebo-controlled parallel group, multicenter, proof-of-concept 12-week study involving 100 patients with moderate Alzheimer’s disease.

Disclosures: The study was funded by Orion Pharma. Two authors are Orion employees, and two authors work for World Wide Clinical Trials, the contract research organization involved in the trial. One author works for the company providing the computerized memory measurement system.

Recommendations for gestational diabetes mellitus screening remain unchanged

BETHESDA, MD. – The current two-step method of diagnosing gestational diabetes mellitus in U.S. pregnant women will not change, based on the recommendations of an independent, voluntary panel at a National Institutes of Health Consensus Development Conference.

The panel released its statement March 6, following the 3-day NIH Consensus Development Conference on Diagnosing Gestational Diabetes Mellitus, during which expert and public comments were incorporated into the draft consensus statement.

The conference convened to review the evidence on methods of diagnosing gestational diabetes mellitus (GDM) and to discuss seven questions about whether to recommend the single-step approach to diagnosing the condition rather than the two-step method now used most commonly in the United States.

The one-step approach was proposed by the International Association of the Diabetes and Pregnancy Study Groups (IADPSG) following a 2008 study suggesting that thousands of women are adversely affected by subclinical hyperglycemia. The independent NIH panel, however, determined that additional research is necessary before recommending the single-step method.

"At present, the panel believes that there is not sufficient evidence to adopt a one-step approach, such as that proposed by the IADPSG," the panel wrote. "The panel is particularly concerned about the adoption of new criteria that would increase the prevalence of GDM, and the corresponding costs and interventions, without clear demonstration of improvements in the most clinically important health and patient-centered outcomes."

GDM currently affects approximately 5%-6% of all U.S. pregnancies, including more than 240,000 pregnant women, according to the NIH. This prevalence, however, is based on the use of the current two-step test, and widespread implementation of the single-step test under consideration would likely increase the number of women diagnosed with GDM by two to three times.

A variety of methods exist for screening women for GDM, depending on whether fasting is required, how many grams of glucose the woman consumes for the test, how many appointments the screening requires, and what glucose threshold is used for diagnosis.

The most commonly used method in the United States, recommended by the American College of Obstetricians and Gynecologists, is a two-step method conducted when women are 24-28 weeks pregnant. A woman’s blood glucose level is first tested 1 hour after she consumes a 50-g glucose drink. If that test shows a blood glucose level of 130 mg/dL or greater, she undergoes a 3-hour, 100-g glucose tolerance test. GDM is diagnosed if her blood glucose levels reach at least two of the following thresholds: 95 mg/dL after fasting, 180 mg/dL at 1 hour, 155 mg/dL at 2 hours, or 140 mg/dL at 3 hours.

The single-step method involves a fasting plasma glucose and a 75-g 2-hour test between 24 and 28 weeks of pregnancy. A result of at least 92 mg/dL at fasting, 180 mg/dL at 1 hour, or 153 mg/dL at 2 hours would be the threshold for diagnosis of GDM.
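
To make the two decision rules concrete, here is a minimal, hypothetical Python sketch of the cutoffs as described above; the function and parameter names are illustrative assumptions, and all values are plasma glucose in mg/dL.

```python
# Hypothetical encodings of the two diagnostic rules described above.
# All values are plasma glucose in mg/dL; names are illustrative.

def two_step_positive(screen_1h: float, fasting: float,
                      h1: float, h2: float, h3: float) -> bool:
    """Two-step rule: 50-g screen, then 100-g 3-hour tolerance test.

    The tolerance test is performed only when the 1-hour screen is
    130 mg/dL or greater; GDM is diagnosed when at least two of the
    four tolerance-test values reach their thresholds.
    """
    if screen_1h < 130:  # step 1: screening threshold not reached
        return False
    exceeded = [fasting >= 95, h1 >= 180, h2 >= 155, h3 >= 140]
    return sum(exceeded) >= 2  # step 2: at least two thresholds met

def one_step_positive(fasting: float, h1: float, h2: float) -> bool:
    """Single-step rule: 75-g 2-hour test; any one threshold diagnoses GDM."""
    return fasting >= 92 or h1 >= 180 or h2 >= 153

# A borderline case: positive under the one-step rule (fasting >= 92)
# but negative under the two-step rule (only one threshold reached).
print(two_step_positive(135, 93, 150, 156, 120))  # False
print(one_step_positive(93, 150, 140))            # True
```

This difference in decision logic, requiring any one elevated value rather than two, is part of why the single-step approach would diagnose so many more women.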

The single-step approach is supported by the American Diabetes Association and the World Health Organization (with 110 mg/dL at fasting and 140 mg/dL at 2 hours) and is used in a number of other countries.

However, several doctors have raised concerns that moving to the single-step approach could lead to more interventions for a much larger number of pregnant women who would now be diagnosed, increasing possible harms and costs.

Dr. Peter VanDorsten, the conference panel chairperson and Lawrence L. Hester, Jr. Professor at the Medical University of South Carolina, Charleston, said the research indicates that using the single-step method would increase the number of women diagnosed with GDM to 15%-20% of all pregnancies.

On the plus side, more diagnoses would result in more treatment for these women, which could include nutritional and lifestyle counseling, more clinic visits, and possible insulin therapy. Possible complications associated with GDM include preeclampsia, cesarean delivery, macrosomia, shoulder dystocia, and birth injuries to the mother. Women diagnosed with GDM are also 35%-60% more likely to develop type 2 diabetes later. Babies born to women with GDM are also at higher risk for hypoglycemia, jaundice, and breathing difficulty at birth.

On the other hand, more diagnoses would result in higher health care costs and more interventions for women, which could lead to possible harms.

"There is also evidence in some studies that the labeling of these women may have unintended consequences, such as an increase in cesarean delivery and more intensive newborn assessments," the panel wrote in their statement. "In addition, increased patient costs, life disruptions, and psychosocial burdens have been identified. Currently available studies do not provide clear evidence that a one-step approach is cost-effective in comparison with the current two-step approach."

During a teleconference about the panel’s statement, Dr. VanDorsten said it makes operational sense to fall in line with other countries in using the 75-g one-time glucose challenge, the same test used in the nonpregnant population – but not yet.

"Until we have evidence that the benefits of extending the possible diagnosis outweigh the harms," he said, the panel did not find it is appropriate to recommend the single-step approach currently.

"We left the door ajar for reconsideration should these data be forthcoming," Dr. VanDorsten added. He noted that funding agencies will often follow with money for research after the NIH has identified research that is needed.

The panel agreed that "a single standard for screening and diagnostic thresholds for GDM should be established by professional organizations" but identified nine major research gaps that must be addressed in determining what this standard should be. These areas include the following:

• Defining the best strategy for developing a diagnostic approach that aligns more closely with international approaches in the most cost-effective manner possible.

• Determining whether women who would be diagnosed with GDM in the single-step – but not two-step – approach would gain benefit from the diagnosis and treatment.

• Understanding the cost-benefit implications of changing the diagnostic standard.

• Understanding the psychological consequences of a GDM diagnosis on women.

• Conducting cohort studies to show the "real-world" impact that GDM treatment has on practices and care utilization.

• Determining what lifestyle interventions might improve outcomes for pregnant women and their children.

• Assessing long-term impacts of changing the GDM diagnostic criteria.

• Understanding the "long-term metabolic, cardiovascular, developmental, and epigenetic impact on offspring whose mothers have been treated for GDM."

• Assessing what interventions might decrease GDM-diagnosed women’s risk of metabolic syndrome, diabetes, and cardiovascular disease.

Dr. VanDorsten did not define a specific timeline regarding when the NIH would revisit this issue, but he noted that as more evidence becomes available from cohort studies and randomized trials reassessing diagnostic screening methods, a recommendation of the single-step method is possible in the future.

The 15 members of the panel included experts in maternal-fetal medicine, obstetrics and gynecology, endocrinology and infertility, pediatrics, nutrition, epidemiology, economics, and statistics. The panel is an independent group whose members’ travel expenses are paid by the NIH but who receive no other compensation for serving on the panel.

AT AN NIH CONSENSUS DEVELOPMENT CONFERENCE

Vitals

Major finding: An independent panel assembled for the NIH Consensus Development Conference on Diagnosing Gestational Diabetes Mellitus determined that the evidence is insufficient to recommend moving from the current two-step GDM screening process to the single-step GDM screening.

Data source: The findings are based on a review of all the current evidence regarding cost-effectiveness and maternal/fetal outcomes, as well as analyses of possible benefits and harms to use of the single-step approach.

Disclosures: The panel is an independent group whose members’ travel expenses are paid for by the NIH but who do not receive other compensation for serving on the panel.

Double-jointed teens have high risk for musculoskeletal pain

Adolescents with hypermobility in their joints – or "double-jointedness" – are almost twice as likely as their normal-jointed counterparts to develop musculoskeletal pain in their shoulders, knees, ankles, and feet as they enter adulthood, according to a case-control study.

The risk of later knee pain was even greater for obese, double-jointed teens, with 10 times greater odds than for normal-weight adolescents without joint hypermobility.

The 4-year prospective study found that 44.8% of teens (with or without joint hypermobility) reported joint pain within the past month that lasted at least a day. A higher proportion of girls (47.5%) than boys (41.3%; P = .001) reported joint pain, which included the spine, shoulder, knee, and ankle/foot, reported Dr. Jonathan H. Tobias of the University of Bristol and his associates (Arthritis Rheum. 2013 Feb. 28 [doi:10.1002/art.37836]).

The researchers assessed hypermobility with the Beighton score in 2,901 teens enrolled in the Avon Longitudinal Study of Parents and Children (ALSPAC) when the participants were a mean 13.8 years old. The 4.6% of teenagers classified as hypermobile included 7% of the 1,634 girls and 1.3% of the 1,267 boys (P less than .001).

The Beighton score runs from 0 to 9, based on the number of hypermobile joints in an evaluation of both thumbs, both little fingers, both elbows, both knees, and the trunk. In this study, joint hypermobility was defined as a Beighton score of 6 or greater, a stricter cutoff than the score of 4 or more frequently used in other studies.
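
As a concrete illustration of the scoring arithmetic, here is a minimal sketch in Python; the item names and input format are illustrative assumptions, not the study's actual protocol.

    # The nine Beighton items: both thumbs, both little fingers, both elbows,
    # both knees, and the trunk (palms flat on the floor with knees straight).
    BEIGHTON_ITEMS = [
        "left_thumb", "right_thumb",
        "left_little_finger", "right_little_finger",
        "left_elbow", "right_elbow",
        "left_knee", "right_knee",
        "trunk",
    ]

    def beighton_score(positive_findings):
        """Count hypermobile findings; the score runs from 0 to 9."""
        return sum(1 for item in BEIGHTON_ITEMS if item in positive_findings)

    def is_hypermobile(score, cutoff=6):
        """This study used a cutoff of 6 or greater; 4 or greater is more common."""
        return score >= cutoff

    score = beighton_score({"left_thumb", "right_thumb", "left_elbow",
                            "right_elbow", "left_knee", "right_knee"})
    print(score, is_hypermobile(score), is_hypermobile(score, cutoff=4))
    # prints: 6 True True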

The researchers gave participants a pain questionnaire 4 years after enrollment to determine which teens had experienced "at least moderately troublesome pain lasting 1 day or longer within the last month, at specific musculoskeletal sites." Participants rated the severity of their pain and the degree to which it interfered with daily activities on scales from 1 to 10.

The 44.8% of participants who reported pain most often cited lower back pain (16.1%), followed by shoulder pain (9.5%), upper back pain (8.9%), knee pain (8.8%), neck pain (8.6%), and ankle/foot pain (6.8%). A total of 4.8% of the participants reported chronic knee, hip, shoulder, or lower back pain lasting at least 3 months. Another 4.4% reported having chronic widespread pain for at least 3 months.

The researchers used a Chi-squared test and then logistic regression analysis to determine odds ratios after adjusting for sex, maternal education, and body-mass index. The adolescents with joint hypermobility had greater odds for shoulder pain (odds ratio, 1.68; 95% confidence interval, 1.04-2.72), knee pain (OR, 1.83; 95% CI, 1.10-3.02), and ankle/foot pain (OR, 1.82; 95% CI, 1.05-3.16). No association was found between joint hypermobility and musculoskeletal pain at the spine, elbows, hands, or hips.
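
A hedged note on how such adjusted odds ratios relate to the underlying logistic-regression output: the model's coefficient is a log odds ratio, so OR = exp(beta) and the 95% CI is exp(beta ± 1.96 × SE). The beta and SE below are back-calculated from the reported knee-pain result purely for illustration; the paper reports only the OR and CI.

    import math

    def or_with_ci(beta, se, z=1.96):
        """Convert a logistic-regression coefficient (log odds ratio) and its
        standard error into an odds ratio with a 95% confidence interval."""
        return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

    beta = math.log(1.83)  # log of the reported knee-pain odds ratio
    se = 0.258             # back-calculated from the reported CI of 1.10-3.02
    print([round(x, 2) for x in or_with_ci(beta, se)])
    # prints: [1.83, 1.1, 3.03]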

Another analysis that accounted for risks associated with obesity found that obese adolescents with joint hypermobility were at particularly high risk for knee pain, with an odds ratio of 1.6 for nonobese hypermobile teens and an OR of 11.0 for obese hypermobile teens (P = .04).

When the researchers analyzed the data using the more common cutoff Beighton score of at least 4, only shoulder pain was significantly associated with hypermobility (OR, 1.42; P = .02), while knee pain (OR, 1.17) and ankle/foot pain (OR, 0.92) were nonsignificant after adjustment.

These findings are consistent with those of another study involving a cohort of 228 younger children aged 10 and 12 years, but they are inconsistent with a recent systematic review of 15 papers that found an association between joint hypermobility and musculoskeletal pain in residents of African and Asian regions but not in Europeans. The authors of the current study note that its differences include a larger sample size, a prospective design, and a definition of joint hypermobility as a Beighton score of 6 or greater, which limited the hypermobile group to children in the top 5%-10% for joint hypermobility.

The pain research in the study was funded by a grant from Arthritis Research UK. The ALSPAC is funded by the UK Medical Research Council, the Wellcome Trust, and the University of Bristol, England. The authors had no disclosures.

rhnews@elsevier.com

Article Source

FROM ARTHRITIS & RHEUMATISM

Vitals

Major finding: Adolescents deemed to have hypermobile joints had greater odds for having pain 4 years later in the shoulder (OR, 1.68), knee (OR, 1.83), and ankle/foot (OR, 1.82) than did adolescents without joint hypermobility.

Data source: A prospective study of 2,901 adolescents who were enrolled in the Avon Longitudinal Study of Parents and Children (ALSPAC) in southwest England.

Disclosures: The pain research in the study was funded by a grant from Arthritis Research UK. ALSPAC is funded by the UK Medical Research Council, the Wellcome Trust, and the University of Bristol. The authors had no disclosures.

Health care-associated infections in hospitals continue to decline

Implementing best practices will continue to boost improvements
Article Type
Changed
Display Headline
Health care-associated infections in hospitals continue to decline

The rates of three major types of health care–associated infections have continued to decrease in U.S. hospitals, according to a new report from the Centers for Disease Control and Prevention.

The rate of central line–associated bloodstream infections (CLABSI) is down nationally by 41%, catheter-associated urinary tract infections (CAUTI) are down by 7%, and surgical site infections (SSI) for a combined 10 surgical procedures are down by 17%.

These declines are measured against the baseline rates of CLABSIs, CAUTIs, and SSIs reported when the U.S. Department of Health and Human Services established its 5-year goals for reducing health care–associated infections by the end of 2013. The HHS goals include reducing CLABSIs by 50%, CAUTIs by 25%, and SSIs by 25%. The American College of Surgeons and the CDC have partnered to develop the means to report, measure, and prevent health care–associated infections, and the ACS has been instrumental in collecting and submitting standard SSI measure data and other data to the CDC’s National Healthcare Safety Network (NHSN) and the ACS’s National Surgical Quality Improvement Program (NSQIP).

“One thing these numbers show us is the complexity of achieving improvement,” said Dr. Clifford Y. Ko, director of the American College of Surgeons (ACS) Division of Research and Optimal Patient Care. “The College’s recent effort with the Joint Commission Center for Transforming Healthcare to reduce SSI has shown us that SSIs are very multifactorial, and not every provider or facility has the same issues to address.”

“Similarly, even the measurement and analytical techniques used in this study can be improved upon,” Dr. Ko added. “While better than they used to be, we know through ACS NSQIP® [the College’s National Surgical Quality Improvement Program] that we can measure and feed back risk-adjusted infection rates on all procedures, not just 10. This is important for gaining traction with all providers because it will likely require the effort of all providers to achieve system-wide, sustained improvement.”

Paul J. Malpiedi and associates at the CDC reported the findings in the 2011 National and State Healthcare-Associated Infections Standard Infection Ratio Report. Mr. Malpiedi’s team compared the standard infection ratios (SIRs) between 2010 and 2011 to determine progress in preventing health care–associated infections.

Standard infection ratios developed at the national, state, and facility levels compare the number of infections that actually occurred to the number that would be expected based on the referent years: 2008 for CLABSIs and SSIs, and 2009 for CAUTIs. The standard infection ratios were adjusted to account for hospital type, hospital size (based on bed number), and hospital affiliation with a medical school.
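
Put simply, an SIR is the observed infection count divided by the predicted count, and an SIR below 1.0 corresponds to a reduction of (1 - SIR) from baseline. A minimal sketch in Python, checked against the CLABSI and SSI counts reported below:

    def sir(observed, predicted):
        """Standardized infection ratio: observed infections divided by the
        number predicted from the referent-year baseline."""
        return observed / predicted

    def percent_reduction(sir_value):
        """An SIR below 1.0 implies a (1 - SIR) reduction from baseline."""
        return (1 - sir_value) * 100

    clabsi_sir = sir(18113, 30617)  # 2011 CLABSIs vs. 2008-based prediction
    ssi_sir = sir(6357, 7683)       # 2011 SSIs vs. 2008-based prediction
    print(round(clabsi_sir, 3), round(percent_reduction(clabsi_sir)))  # 0.592 41
    print(round(ssi_sir, 3), round(percent_reduction(ssi_sir)))        # 0.827 17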

Mr. Malpiedi’s team analyzed the data reported for the 2011 calendar year to the NHSN from 3,472 facilities for CLABSIs, 1,807 facilities for CAUTIs, and 2,130 facilities for SSIs, based on reports submitted through Sept. 4, 2012. Non–acute care hospitals, outpatient dialysis facilities, inpatient dialysis wards, long-term care facilities, and outpatient surgical settings were excluded from the analysis.

A total of 18,113 CLABSIs were reported during 2011, compared with 30,617 predicted to occur based on the 2008 referent population, for an SIR of 0.592. This 41% reduction is an improvement over the 32% reduction reported for 2010. With a median SIR of 0.469, half the reporting facilities in 2011 had reduced their CLABSIs by at least 53%. The lowest rate of CLABSIs was reported in ICUs, where the infections had declined 44% since 2008.

The number of facilities reporting infection data increased from 2010 to 2011. The 3,472 facilities in 50 states and the District of Columbia that reported data for CLABSIs represented a 55% increase from those reporting in 2010. The 2011 data came from 12,122 patient care locations, which included 5,722 ICUs (47%), 5,436 wards (45%) and 946 NICUs (8%).

The overall reduction in CAUTIs was less substantial, with no significant overall change since 2010. The 7% reduction, for an SIR of 0.93, came from 14,315 reported CAUTIs, compared with the 15,398 predicted based on the 2009 referent population. Specifically, CAUTIs in wards declined about 15%, while infection rates in ICUs remained unchanged.

When only the 550 facilities that reported in both 2010 and 2011 were included in the analysis, the reduction since 2010 was statistically significant. A total of 1,807 facilities in 50 states and the District of Columbia reported CAUTI data, an 84% increase from the 981 reporting facilities in 2010. The 6,402 patient care locations included in the CAUTI data came from 2,633 ICUs (41%) and 3,769 wards (59%).

A total of 2,130 facilities from 48 states and the District of Columbia reported SSI data. Among the 748,192 surgical procedures included, 6,357 deep incisional and organ/space infections occurred, compared with the 7,683 SSIs predicted using the 2008 baseline, for an SIR of 0.827.

This lower SIR represents a 17% decline in SSIs since 2008. SSIs declined for hip arthroplasty (10.4% decline), knee arthroplasty (14.3%), coronary artery bypass graft (22.1%), cardiac surgery (30.2%), peripheral vascular bypass surgery (25.5%), abdominal aortic aneurysm repair (45.7%), colon surgery (20.4%), rectal surgery (25.6%), abdominal hysterectomy (16.6%), and vaginal hysterectomy (13.3%).

The increase in reporting facilities in 2011 is partly a result of new state requirements for reporting health care–associated infections to the NHSN (30 states plus the District of Columbia as of December 2012) and from the federal requirement that all hospitals participating in the CMS Hospital Inpatient Quality Reporting Program report these infections to the NHSN.

The authors estimated that each CLABSI occurring in ICU patients cost the CMS approximately $26,000. However, the report did not include information on the insurance status of the patients with CLABSIs, so this figure would not necessarily apply to patients with private insurance.

The report was funded by the CDC, and no disclosures were noted.

surgerynews@elsevier.com

The CDC report on health care–associated infections is great news. It shows that we have been making significant and substantial progress against the often preventable infections that occur in our hospitals. Reductions of 41% (CLABSI) are very impressive. This is a significant number of patients who did not get infected, receive otherwise unnecessary antibiotics, or remain in the hospital longer than necessary. This also represents a significant cost savings. As we strive for improved value for our patients – higher quality care at lower costs – improvements like this are amazing.

One interesting finding is that, while there are reductions in CAUTIs and SSIs, they are not as significant as the reduction in CLABSIs. I think part of this has to do with the research into CLABSI and the fact that it lent itself well to the use of protocols and checklists, which are easily adopted by institutions. Peter Pronovost’s 2006 New England Journal of Medicine study detailed a 66% reduction in CLABSI throughout Michigan ICUs via the use of a simple checklist. SSIs also lend themselves to "protocol-ization." CAUTIs are slightly more difficult because a different human factor is introduced – the convenience and wishes of the patient. We need to continue educating our patients about CAUTIs and developing protocols that make the early removal of catheters the norm rather than the exception.

Physicians should be proud of their efforts in reducing health care–associated infections. We need to continue working hard to sustain these gains and identify other areas where similar interventions will yield positive outcomes. Sustained education and intervention will get us close to the HHS goals by the end of 2013, if not achieve them outright. One simple method of preventing health care–associated infections is to (a) implement a standardized checklist of proven steps to reduce said infections, and (b) empower members of the health care team to stop the provider when those steps are not being followed. A team approach, both in the development and implementation of these protocols, is essential to initial and sustained success.

Dr. Michael Pistoria is an internal medicine specialist and hospitalist at Allentown Hospital and Bethlehem Hospital in Pennsylvania. He is a senior fellow of the Society of Hospital Medicine and served as lead editor of the publication "Core Competencies in Hospital Medicine," which defined hospitalists’ roles. He made these comments in an e-mail interview with this news organization.

Vitals

Major finding: Since 2008, central line-associated bloodstream infections in facilities from all 50 states and the District of Columbia have declined 41% and surgical site infections have declined 17%, with catheter-associated urinary tract infections showing a decline of 7% since 2009.

Data source: Reports to the NHSN from 3,472 facilities for central line-associated bloodstream infections, 1,807 facilities for catheter-associated urinary tract infections, and 2,130 facilities for surgical site infections occurring during the 2011 calendar year, submitted through Sept. 4, 2012.

Disclosures: The study was funded by the U.S. Centers for Disease Control and Prevention. No disclosures were noted.

Loss of autism diagnosis and symptoms achievable, with caveats

Nonoptimal outcomes are not failures
Article Type
Changed
Display Headline
Loss of autism diagnosis and symptoms achievable, with caveats

The potential for high-functioning autistic children both to lose their autism spectrum disorder diagnosis and to achieve typical, nonautistic social and communication functioning was demonstrated in a recent study in the Journal of Child Psychology and Psychiatry.

The participants initially met criteria for an autism spectrum disorder (ASD) diagnosis but have since lost both the diagnosis and all ASD symptoms, based on clinical judgment and on assessments of social cognition (face recognition), language, and social interaction as measured on the Vineland Adaptive Behavior Scales (VABS) and the Autism Diagnostic Observation Schedule (ADOS).

The small study, involving 34 formerly ASD children who achieved "optimal outcomes" (OO) and their matched cohorts, includes a number of limitations and was conducted largely to "demonstrate the existence" of a group that clearly had autism previously and now no longer does, reported Dr. Deborah Fein of the University of Connecticut, Storrs, and her associates (J. Child Psychol. Psychiatry 2013;54:195-205).

The researchers matched the OO participants by gender, age, and nonverbal IQ to 44 high-functioning autism (HFA) participants and 34 typically developing (TD) participants. OO participant eligibility required a documented ASD diagnosis before age 5 years, which a study coinvestigator independently confirmed using only the behavioral notes in the child’s records. Additionally, OO participants had to have typically developing friends, could not currently meet the criteria for any ASD diagnosis (also independently confirmed), needed a score of at least 77 on the communication and socialization domains of the VABS, and could not be receiving any special education services related to autism.

Children in the HFA group had to meet ASD criteria clinically and with the ADOS. Children in the TD group could not have ever met criteria for ASD (by parent report) or have a first-degree relative with ASD, and had to have at least a 77 on the communication and socialization VABS domains. All group participants were excluded if they had a debilitating active psychotic disorder, severe visual or hearing impairments, a seizure disorder, fragile X syndrome, or any significant head trauma.
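
To keep the optimal-outcome entry criteria straight, here is a minimal sketch encoding them as a predicate in Python; the field names are illustrative assumptions, not the study's instruments or data format.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        asd_before_age_5: bool      # documented early diagnosis, independently confirmed
        currently_meets_asd: bool   # must be False, also independently confirmed
        td_friends: bool            # has typically developing friends
        vabs_communication: float   # VABS domain standard scores
        vabs_socialization: float
        autism_special_ed: bool     # receiving autism-related special education

    def oo_eligible(c, vabs_cutoff=77):
        """Apply the optimal-outcome eligibility rules described above."""
        return (c.asd_before_age_5
                and not c.currently_meets_asd
                and c.td_friends
                and c.vabs_communication >= vabs_cutoff
                and c.vabs_socialization >= vabs_cutoff
                and not c.autism_special_ed)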

During approximately 6-hour testing sessions, the study participants underwent assessments using the ADOS, the VABS, the Benton Facial Recognition Test, the Clinical Evaluation of Language Fundamentals-IV, the Wechsler Abbreviated Scale of Intelligence for verbal and nonverbal IQ, and the Edinburgh Handedness Inventory. The latter test was used because "left-handedness or delayed maturation of handedness is overrepresented in autism." Parent interviews were used to establish the severity of the children’s initial ASD diagnosis, using the Autism Diagnostic Interview-Revised (ADI-R), and all parents were interviewed using the Social Communication Questionnaire (SCQ).

No OO or TD participants currently met ASD diagnostic criteria. Although seven OO participants showed some mild social impairment, it was determined to be nonautistic and related to anxiety, depression, embarrassment, inattention, or related issues. TD and OO participants had nearly identical, high-average verbal IQ scores, which were an average of 7 points higher than the HFA verbal IQ scores. The HFA group was below average on facial recognition scores; in the OO and TD groups, facial recognition scores were average and similar.

On the ADOS communication items, 21 TD and 20 OO participants had straight zeroes, indicating the most typical functioning; none of the HFA participants scored straight zeroes on these items. Also, 22 TD and 16 OO participants had straight zeroes on the ADOS social items, but none of the HFA participants did. Scores on the VABS communication (OO, 98.30; TD, 93.44), socialization (OO, 102.03; TD, 101.74), and daily living (OO, 92.30; TD, 88.76) scales were similar between the OO and TD groups. The HFA group’s mean scores were significantly lower in the same domains (82.70, 75.51, and 75.40, respectively).

Although the OO participants had shown less impaired lifetime socialization scores on the ADI-R than the HFA group (15.24, compared with 20.30; P less than .001), the two groups’ communication (OO, 14.30; HFA, 15.51) and repetitive behaviors (OO, 5.85; HFA, 6.19) scores were similar. Yet the OO participants’ autism symptoms were, on average, a bit milder than those of the HFA group, according to comparisons of parent reports. Indeed, the OO individuals’ milder childhood autism is one limitation of the study, and the seemingly similar reports of communication and repetitive behaviors between the OO and HFA individuals could be biased by parent report.

Dr. Fein and her associates concluded that their "results clearly demonstrate the existence of a group of individuals with an early history of ASD, who no longer meet criteria for any ASD, and whose communication and socialization skills ... are on par with that of TD individuals."

They noted, however, that there may be "subtle residual deficits" among the OO participants that the assessments did not detect, and they are analyzing further results of cognitive ability, language, academics, and executive function testing for later reporting.

Dr. Fein and her associates also noted that analyzing peer interaction and the quality of friendships would more conclusively establish evidence of normal social functioning in the OO group.

The surprisingly higher average IQ scores among the OO individuals also point to the possibility that "above average cognition allowed individuals with ASD to compensate for some of their deficits" or that there was a higher rate of study volunteerism among families with higher-IQ children, they said. Further, OO participants were screened specifically to include scores in "the normal range on specific cognitive and adaptive measures," reducing likely differences between the OO and TD children.

The study’s applicability also has significant limitations. The researchers cannot address the question of how many children with ASD can necessarily reach these outcomes, which would require a prospective, longitudinal study. The study also does not offer insights into which interventions – if any – might more likely produce an optimal outcome, which itself was narrowly defined in this study. It’s also unclear whether the optimal outcomes result from compensatory functioning or from actual changes in brain structure and function, Dr. Fein and her associates said.

The researchers also mentioned a lack of diversity in their study, which enrolled mainly children in the northeastern United States and largely white participants. They theorized that OO may be rare in children from minority groups or families with lower socioeconomic status because of lack of optimal interventions or resources.

Other "crucial questions" remain related to the "biology of remediable autism, the course of improvement, and the necessary and sufficient conditions, including treatment, for such improvement," they said.

The study was funded by the National Institutes of Health. The authors said that they had no relevant disclosures.

This is an important paper that, like all others, needs replication, and there is reasonably strong evidence that early detection and intervention have led, on balance, to significant improvements in outcome, said Dr. Fred Volkmar.

There are, however, complexities associated with understanding the word "cure." It is important to realize that a range of outcomes is possible, and we sometimes don’t have a good sense until adolescence of how well a person will do. Sadly, even with good programs and for reasons we don’t understand, the degree of improvement is sometimes not what we would want.

Yet lesser improvement should not be regarded as "failure," noted Dr. Sally Ozonoff.

Researchers have generally avoided the word "recovery," as Dr. Fein and her associates do in this study, to avoid creating false hopes, sounding like marketing materials for treatments, or implying that any outcome other than an optimal one is a failure. Yet, while recovery won’t be possible for everyone, nor is it the only outcome worth fighting for, this study does provide reason to talk seriously about the possibility of "recovery," as long as it does not detract attention from those who achieve smaller gains.

Meanwhile, other optimal outcomes, as A.A. Broderick has noted, can also "include emergence from isolation into engagement with the world and full participation in an ordinary life, even while retaining significant symptoms," Dr. Ozonoff wrote.

Dr. Volkmar is chief of child psychiatry at Yale-New Haven Hospital and director of the Child Study Center at Yale University. He made these comments in an interview. Dr. Ozonoff, joint editor of the Journal of Child Psychology and Psychiatry, is a professor specializing in autism research in the department of psychiatry and behavioral sciences at the University of California, Davis. Her comments appeared in a commentary published with the study (J. Child Psychol. Psychiatry 2013;54:113-4).

Author and Disclosure Information

Publications
Topics
Legacy Keywords
potential, high-functioning, autistic, children, autism spectrum disorder, diagnosis, nonautistic social, communication functioning, the Journal of Child Psychology and Psychiatry, ASD, the Vineland Adaptive Behavior Scales, VABS, Autism Diagnostic Observation Schedule, ADOS
Author and Disclosure Information

Author and Disclosure Information

Body

This is an important paper that, like all others, needs replication, and there is reasonably strong evidence that early detection and intervention have led, on balance, to significant improvements in outcome, said Dr. Fred Volkmar.

There are, however, complexities associated with understanding the word "cure." It is important to realize that a range of outcomes is possible, and we sometimes don’t have a good sense until adolescence of how well a person will do. Sadly, even with good programs and for reasons we don’t understand, the degree of improvement is not what we want.

Yet lesser improvement should not be regarded as "failure," noted Dr. Sally Ozonoff.

Researchers have generally avoided the word "recovery," as Dr. Fein and her associates do in this study, to avoid creating false hopes, sounding like marketing materials for treatments, or implying any other outcome than an optimal one is a failure. Yet, while recovery won’t be possible for everyone or the only outcome worth fighting for, this study does provide reason to talk seriously about the possibility of "recovery" as long as it does not detract attention from those who achieve smaller gains.

Meanwhile, other optimal outcomes, as A.A. Broderick has noted, can also "include emergence from isolation into engagement with the world and full participation in an ordinary life, even while retaining significant symptoms," Dr. Ozonoff wrote.

Dr. Volkmar is chief of child psychiatry at Yale-New Haven Hospital and director of the Child Study Center at Yale University. He made these comments in an interview. Dr. Ozonoff, joint editor of the Journal of Child Psychology and Psychiatry, is a professor specializing in autism research in the department of psychiatry and behavioral sciences at the University of California, Davis. Her comments appeared in a commentary published with the study (J. Child Psychol. Psychiatry 2013;54:113-4).

Body

This is an important paper that, like all others, needs replication, and there is reasonably strong evidence that early detection and intervention have led, on balance, to significant improvements in outcome, said Dr. Fred Volkmar.

There are, however, complexities associated with understanding the word "cure." It is important to realize that a range of outcomes is possible, and we sometimes don’t have a good sense until adolescence of how well a person will do. Sadly, even with good programs and for reasons we don’t understand, the degree of improvement is not what we want.

Yet lesser improvement should not be regarded as "failure," noted Dr. Sally Ozonoff.

Researchers have generally avoided the word "recovery," as Dr. Fein and her associates do in this study, to avoid creating false hopes, sounding like marketing materials for treatments, or implying any other outcome than an optimal one is a failure. Yet, while recovery won’t be possible for everyone or the only outcome worth fighting for, this study does provide reason to talk seriously about the possibility of "recovery" as long as it does not detract attention from those who achieve smaller gains.

Meanwhile, other optimal outcomes, as A.A. Broderick has noted, can also "include emergence from isolation into engagement with the world and full participation in an ordinary life, even while retaining significant symptoms," Dr. Ozonoff wrote.

Dr. Volkmar is chief of child psychiatry at Yale-New Haven Hospital and director of the Child Study Center at Yale University. He made these comments in an interview. Dr. Ozonoff, joint editor of the Journal of Child Psychology and Psychiatry, is a professor specializing in autism research in the department of psychiatry and behavioral sciences at the University of California, Davis. Her comments appeared in a commentary published with the study (J. Child Psychol. Psychiatry 2013;54:113-4).

Title
Nonoptimal outcomes are not failures
Nonoptimal outcomes are not failures

The potential for high-functioning autistic children to lose both their autism spectrum disorder diagnosis and to achieve typical, nonautistic social and communication functioning was demonstrated in a recent study in the Journal of Child Psychology and Psychiatry.

The participants initially met criteria for an autism spectrum disorder (ASD) diagnosis, but they have since lost all ASD symptoms and diagnosis based on clinical judgment and on assessments in social cognition (face recognition), language, and social interaction as measured on the Vineland Adaptive Behavior Scales (VABS) and the Autism Diagnostic Observation Schedule (ADOS).

The small study, involving 34 formerly ASD children who achieved "optimal outcomes" (OO) and their matched cohorts, includes a number of limitations and was conducted largely to "demonstrate the existence" of a group that clearly had autism previously and now no longer does, reported Dr. Deborah Fein of the University of Connecticut, Storrs, and her associates (J. Child Psychol. Psychiatry 2013;54:195-205).

Dr. Deborah Fein

The researchers matched the OO participants by gender, age, and nonverbal IQ to 44 high-functioning autism (HFA) participants and 34 typically developing (TD) participants. OO participant eligibility required a documented ASD diagnosis before age 5 years, which was confirmed independently with a study coinvestigator using only behavior notes in the child’s records. Additionally, OO participants had to have typically developing friends, could not currently meet the criteria for any ASD diagnosis (also independently confirmed), needed at least a 77 on the communication and socialization domains of the VABS, and could not be receiving any special education services related to autism.

Children in the HFA group had to meet ASD criteria clinically and with the ADOS. Children in the TD group could not have ever met criteria for ASD (by parent report) or have a first-degree relative with ASD, and had to have at least a 77 on the communication and socialization VABS domains. All group participants were excluded if they had a debilitating active psychotic disorder, severe visual or hearing impairments, a seizure disorder, fragile X syndrome, or any significant head trauma.

During approximately 6-hour testing sessions, the study participants underwent assessments using the ADOS, the VABS, the Benton Facial Recognition Test, the Clinical Evaluation of Language Fundamentals-IV, the Wechsler Abbreviated Scale of Intelligence for verbal and nonverbal IQ, and the Edinburgh Handedness Inventory. The latter test was used because "left-handedness or delayed maturation of handedness is overrepresented in autism." Parent interviews were used to establish the severity of the children’s initial ASD diagnosis, using the Autism Diagnostic Interview-Revised (ADI-R), and all parents were interviewed using the Social Communication Questionnaire (SCQ).

No OO or TD participants currently met ASD diagnostic criteria. Although seven OO participants showed some mild social impairment, it was determined to be nonautistic and related to anxiety, depression, embarrassment, inattention, or related issues. TD and OO participants had nearly identical, high-average verbal IQ scores, which averaged 7 points higher than the HFA verbal IQ scores. The HFA group scored below average on facial recognition; in the OO and TD groups, facial recognition scores were average and similar.

On the ADOS communication items, 21 TD and 20 OO participants had straight zeros, indicating the most typical functioning; none of the HFA participants scored straight zeros on these items. Also, 22 TD and 16 OO participants had straight zeros on the ADOS social items, but none of the HFA participants did. Scores on the VABS communication (OO, 98.30; TD, 93.44), socialization (OO, 102.03; TD, 101.74), and daily living (OO, 92.30; TD, 88.76) scales were similar between the OO and TD groups. The HFA group’s mean scores were significantly lower in the same domains (82.70, 75.51, and 75.40, respectively).

Although the OO participants had shown less impaired lifetime socialization scores on the ADI-R than the HFA group (15.24, compared with 20.30; P less than .001), the two groups’ communication (OO, 14.30; HFA, 15.51) and repetitive behaviors (OO, 5.85; HFA, 6.19) scores were similar. Yet the OO participants’ autism symptoms were, on average, a bit milder than those of the HFA group, according to comparisons of parent reports. Indeed, the OO individuals’ milder childhood autism is one limitation of the study, and the seemingly similar reports of communication and repetitive behaviors between the OO and HFA individuals could be biased by parent report.

Dr. Fein and her associates concluded that their "results clearly demonstrate the existence of a group of individuals with an early history of ASD, who no longer meet criteria for any ASD, and whose communication and socialization skills ... are on par with that of TD individuals."

They noted, however, that there may be "subtle residual deficits" among the OO participants that the assessments did not detect, and they are analyzing further results of cognitive ability, language, academics, and executive function testing for later reporting.

Dr. Fein and her associates also noted that analyzing peer interaction and the quality of friendships would more conclusively establish evidence of normal social functioning in the OO group.

The surprisingly higher average IQ scores among the OO individuals also point to the possibility that "above average cognition allowed individuals with ASD to compensate for some of their deficits" or that families with higher-IQ children volunteered for the study at a higher rate. Further, OO participants were screened specifically to include scores in "the normal range on specific cognitive and adaptive measures," reducing likely differences between the OO and TD children.

The study’s applicability also has significant limitations. The researchers cannot address the question of what proportion of children with ASD can reach these outcomes, which would require a prospective, longitudinal study. The study also does not offer insights into which interventions – if any – might make an optimal outcome more likely, and the optimal outcome itself was narrowly defined in this study. It’s also unclear whether the optimal outcomes result from compensatory functioning or from actual changes in brain structure and function, Dr. Fein and her associates said.

The researchers also mentioned a lack of diversity in their study, which enrolled mainly children in the northeastern United States and largely white participants. They theorized that OO may be rare in children from minority groups or families with lower socioeconomic status because of lack of optimal interventions or resources.

Other "crucial questions" remain related to the "biology of remediable autism, the course of improvement, and the necessary and sufficient conditions, including treatment, for such improvement," they said.

The study was funded by the National Institutes of Health. The authors said that they had no relevant disclosures.


Display Headline
Loss of autism diagnosis and symptoms achievable, with caveats
Article Source

FROM THE JOURNAL OF CHILD PSYCHOLOGY AND PSYCHIATRY


Vitals

Major Finding: Children with autism spectrum disorder can lose their diagnosis and their autistic symptoms and achieve performance in the typical development range for verbal IQ, communication skills, and social functioning – one "optimal outcome" for children with ASD.

Data Source: A battery of social, communication, IQ, and related functioning assessments of 34 "optimal outcome" (formerly ASD) participants, 44 high-functioning autism participants, and 34 typically developing participants, all matched by gender, age, and nonverbal IQ.

Disclosures: The study was funded by the National Institutes of Health. The authors said that they had no relevant disclosures.

Cough aerosols flagged most-infectious TB patients

Article Type
Changed
Display Headline
Cough aerosols flagged most-infectious TB patients

A new, effective method of assessing tuberculosis infectiousness involves directly measuring aerosols from the coughs of pulmonary TB patients, according to a study published Jan. 10 in the American Journal of Respiratory and Critical Care Medicine.

An analysis of cough aerosols, when available, more accurately predicted transmission than did the traditional method of sputum smear microscopy or culture, reported Dr. Edward C. Jones López of Boston Medical Center and his associates (Am. J. Respir. Crit. Care Med. 2013 Jan. 10 [doi: 10.1164/rccm.201208-1422OC]).

Photo Credit: Janice Carr, Centers for Disease Control and Prevention
Gram-positive M. tuberculosis bacteria

The researchers analyzed the number of M. tuberculosis colony-forming units (CFUs) in TB patients’ cough aerosols to see whether the CFU count better predicted new infection in contacts than did an acid-fast bacilli (AFB)-positive sputum smear. The study group included 96 adult TB patients with sputum AFB-positive culture and their 442 household contacts, enrolled from May 2009 to January 2011.

The TB patients attended the Mulago Hospital National Tuberculosis and Leprosy Programme in Kampala, Uganda, and lived with at least three household contacts. All had an initial AFB of at least 1+ plus M. tuberculosis culture growth and had received fewer than 6 days of antituberculous treatment or no treatment.

A total of 45% of patients (43 of 96) produced culturable M. tuberculosis in aerosols during the two 5-minute coughing periods of sample collection. The 26% of patients who produced high aerosols (at least 10 CFUs) were more likely to transmit an infection to their contacts than were the 19% with low aerosols (1-9 CFUs) or the 55% who were aerosol negative. Ten CFUs was selected as the cutoff because of an associated increase in tuberculin skin test (TST) conversion risk at that number.

New infections were diagnosed through a positive TST or interferon-gamma release assay (IGRA), with retests 6 weeks later for contacts who tested negative at baseline on both. While 69% of the contacts of high-aerosol patients were at risk of TST conversion, 25% of contacts of low-aerosol patients and 30% of contacts of aerosol-negative patients were at risk of conversion (P = .009).

TST conversion risk in contacts of low-aerosol and aerosol-negative patients was similar (odds ratio, 0.77; 95% confidence interval, 0.27-2.17; P = .62). However, the risk in contacts of high-aerosol patients was more than five times that in contacts of low-aerosol patients (OR, 5.18; 95% CI, 1.52-17.61) before adjustment. An adjusted analysis yielded a similar odds ratio (OR, 4.81; 95% CI, 1.20-19.23).
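For readers who want to see the arithmetic behind such estimates, the sketch below computes a crude odds ratio with a Wald 95% confidence interval from a 2×2 table of contacts. The cell counts are hypothetical – the article reports only percentages, and the published estimates come from models that account for household clustering – so this illustrates the method rather than reproducing the study’s analysis.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed contacts who converted, b = exposed who did not,
    c = unexposed who converted, d = unexposed who did not."""
    or_est = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_est) - z * se_log)
    hi = math.exp(math.log(or_est) + z * se_log)
    return or_est, lo, hi

# Hypothetical counts for illustration only (not the study's data):
print(odds_ratio_ci(40, 18, 12, 36))  # -> roughly (6.67, 2.83, 15.73)
```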

Meanwhile, "the same analysis using sputum AFB smear grade to classify exposure groups did not show a clear or consistent risk stratification," the authors wrote. High aerosol production therefore predicted new TB infections, as gauged by risk of TST conversion, more accurately than smear grade did.

"In addition to providing a more precise marker of source infectiousness, cough aerosols may help determine the individual risk of M. tuberculosis infection after exposure, which can be variable and is poorly understood," the authors wrote. Yet they acknowledge the limitation that cough aerosols’ predictive value over time is unknown.

They noted three primary implications of their findings, the first of which is a "new framework for rational and cost-effective infection control decisions," since the common wisdom that all sputum AFB-positive patients are equally infectious no longer necessarily holds. They also noted that latent tuberculosis infection treatment programs may be improved through more efficient selection of exposed contacts.

Finally, the authors suggested that analyzing TB aerosols might offer more accurate classifications of contacts’ inhaled doses of TB, thereby potentially offering an opportunity to better understand how the immune system responds to TB. This information may, in turn, contribute to studies of TB vaccines, medications, and immune responses.

This study was supported by a University of Medicine and Dentistry of New Jersey Foundation award with matching funds from the Division of Infectious Diseases at New Jersey Medical School, funds from the section of infectious diseases at Boston Medical Center and support to Dr. Matthew Fox from the National Institute of Allergy and Infectious Diseases.

Article Source

FROM THE AMERICAN JOURNAL OF RESPIRATORY AND CRITICAL CARE MEDICINE


Vitals

Major Finding: While 69% of the contacts of the tuberculosis patients who produced cough aerosols with at least 10 M. tuberculosis CFUs were at risk of TST conversion, only 25% of contacts of low-aerosol patients and 30% of the contacts of aerosol-negative patients were at risk of TST conversion. Analysis using sputum acid-fast bacilli smear grade did not show clear risk stratification.

Data Source: Analysis of 96 sputum AFB-positive TB patients attending Mulago Hospital National Tuberculosis and Leprosy Programme in Kampala, Uganda, from May 2009 to January 2011, plus subsequent new TB infections in their 442 household contacts.

Disclosures: This study was supported by a University of Medicine and Dentistry of New Jersey Foundation award, the Division of Infectious Diseases at New Jersey Medical School, Boston Medical Center, and the National Institute of Allergy and Infectious Diseases. There were no relevant conflicts of interest.

Abatacept proves noninferior to adalimumab for rheumatoid arthritis

Article Type
Changed
Display Headline
Abatacept proves noninferior to adalimumab for rheumatoid arthritis

Abatacept achieved efficacy similar to that of adalimumab, with a comparable safety profile, in the treatment of rheumatoid arthritis in a 2-year, head-to-head randomized trial.

The multinational phase IIIb study established the noninferiority of abatacept as compared with the commonly used adalimumab. Both agents are targeted biologic disease-modifying antirheumatic drugs (bDMARDs), but with different mechanisms of action: Adalimumab is a tumor necrosis factor inhibitor, whereas abatacept is a T-cell costimulation modulator.

Dr. Michael E. Weinblatt of Brigham and Women’s Hospital in Boston and his associates conducted an intent-to-treat analysis of the two drugs, administered along with a stable dosage of methotrexate (MTX), in 646 patients who had confirmed diagnoses of rheumatoid arthritis for less than 2 years and an inadequate response to MTX alone (Arthritis Rheum. 2013;65:28-38 [doi: 10.1002/art.37711]).

Dr. Michael E. Weinblatt

The patients had not received previous bDMARD therapy, and all had active disease with a score of at least 3.2 on the Disease Activity Score in 28 joints using the C-reactive protein level (DAS28-CRP). They were stratified according to whether they had moderate disease (a DAS28-CRP score of 3.2-5.1) or severe disease (greater than 5.1), with approximately equal distribution of disease severity in both treatment groups.
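For context, the DAS28-CRP is a weighted composite score. The sketch below uses the commonly published formula – an assumption here, since the trial report does not restate it – to show how joint counts, the C-reactive protein level, and the patient’s global health assessment combine into the score used for stratification.

```python
import math

def das28_crp(tender28, swollen28, crp_mg_l, global_health_0_100):
    """DAS28-CRP per the commonly published formula (assumed, not taken
    from the trial report): 0.56*sqrt(TJC28) + 0.28*sqrt(SJC28)
    + 0.36*ln(CRP + 1) + 0.014*GH + 0.96, with CRP in mg/L and the
    patient's global health (GH) rating on a 0-100 mm scale."""
    return (0.56 * math.sqrt(tender28) + 0.28 * math.sqrt(swollen28)
            + 0.36 * math.log(crp_mg_l + 1)
            + 0.014 * global_health_0_100 + 0.96)

# Example: 8 tender and 6 swollen joints, CRP 20 mg/L, GH 50
print(round(das28_crp(8, 6, 20, 50), 2))  # -> 5.03, i.e., moderate disease
```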

A total of 318 patients received 125 mg of subcutaneous abatacept once a week, and 328 patients received 40 mg of subcutaneous adalimumab once every other week. Both groups received stable doses of MTX at 15-25 mg per week (patients with a documented intolerance to higher dosages could receive as little as 7.5 mg). Patients could not take any other DMARDs during the trial but could receive either hydroxychloroquine or sulfasalazine, as well as low-dose oral corticosteroids, nonsteroidal anti-inflammatory drugs (NSAIDs), and up to two courses of high-dose corticosteroids.

Patients could not be blinded to the treatments for logistical reasons, but clinical assessors of their treatment, adverse events, and disease severity were blinded. The 13.8% of abatacept patients and 18% of adalimumab patients who discontinued therapy were considered nonresponders at all visits after they discontinued the study.

The researchers used the American College of Rheumatology 20% improvement response (ACR20) at 1 year as the primary outcome. The ACR20 criteria require at least a 20% reduction in the number of tender joints (out of 68) and swollen joints (out of 66), as well as at least 20% improvement in three of five other measures: the patient’s pain, the patient’s and doctor’s global assessments of disease, physical function, and the C-reactive protein level.
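As a minimal sketch of that definition, the function below applies the ACR20 logic just described. The parameter names and the simple tally of improved measures are illustrative assumptions, not the trial’s actual case-report logic.

```python
def acr20_response(tender_before, tender_after,
                   swollen_before, swollen_after,
                   n_other_measures_improved):
    """ACR20 as described above: at least a 20% reduction in the tender
    joint count (of 68) and the swollen joint count (of 66), plus at
    least 20% improvement in 3 of the 5 remaining measures.
    `n_other_measures_improved` is how many of those 5 improved >= 20%."""
    tender_ok = tender_after <= 0.8 * tender_before
    swollen_ok = swollen_after <= 0.8 * swollen_before
    return tender_ok and swollen_ok and n_other_measures_improved >= 3

# Example: tender joints 12 -> 8, swollen 10 -> 7, 4 of 5 other measures improved
print(acr20_response(12, 8, 10, 7, 4))  # -> True
```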

At 1 year, 64.8% of patients in the abatacept group (95% confidence interval, 59.5%-70.0%) and 63.4% of patients in the adalimumab group (95% CI, 58.2%-68.6%) achieved an ACR20 response. The researchers estimated the difference between the two groups’ ACR20 response rates to be 1.8% (95% CI, –5.6% to 9.2%).
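For readers who want to check the noninferiority arithmetic, a crude risk difference with a Wald 95% confidence interval can be computed from the reported response rates and arm sizes, as sketched below. The published estimate came from the trial’s own weighted analysis, so the crude figures land near, but not exactly on, the reported values.

```python
import math

def risk_difference_ci(p1, n1, p2, n2, z=1.96):
    """Crude difference between two response proportions with a Wald 95% CI."""
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Reported ACR20 rates: 64.8% of 318 abatacept vs. 63.4% of 328 adalimumab patients
print(risk_difference_ci(0.648, 318, 0.634, 328))
# -> roughly (0.014, -0.060, 0.088), i.e., 1.4% (-6.0% to 8.8%)
```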

Secondary outcomes of ACR50 and ACR70 were also reported at 1 year: 46.2% of abatacept patients and 46% of adalimumab patients achieved an ACR50 response. Meanwhile, 29.2% of abatacept patients and 26.2% of adalimumab patients achieved an ACR70 response. The adjusted mean improvement in the DAS28-CRP score at 1 year was –2.30 for the abatacept patients and –2.27 for the adalimumab patients. Using an ACR/European League Against Rheumatism definition, the remission rate among abatacept patients was 13.5%, compared with 15.7% among adalimumab patients.

Safety profiles were similar in both groups, except that more adalimumab patients reported local injection site reactions (9.1%) and injection pain (2.4%) than did abatacept patients (3.8% and 0%, respectively). Overall, the serious adverse event (SAE) rate was 10.1% in the abatacept group and 9.1% in the adalimumab group; related SAEs were 2.5% in the abatacept group and 3.4% in the adalimumab group.

Small percentages of participants discontinued the study because of serious adverse events (1.3% of patients taking abatacept and 3% of those taking adalimumab), while 3.5% of the patients on abatacept and 6.1% of patients on adalimumab discontinued because of nonserious adverse events.

The researchers concluded that both drugs are of "comparable clinical benefit, suggesting that these two agents should be considered equally for the treatment of rheumatoid arthritis patients who have an inadequate response to MTX."

The study was funded by Bristol-Myers Squibb, which manufactures abatacept. All eight authors have past or present financial relationships, including research grants, consulting fees, and/or stock ownership (two authors) in Bristol-Myers Squibb. Most have financial relationships with Abbott and multiple other pharmaceutical companies.

Article Source

FROM ARTHRITIS AND RHEUMATISM


Vitals

Major Finding: Treatment with abatacept resulted in an ACR20 response in 64.8% of patients, with a serious adverse event rate of 10.1%, while adalimumab led to an ACR20 response in 63.4% and a serious adverse event rate of 9.1%.

Data Source: An intent-to-treat analysis of a 2-year, phase IIIb, multinational, prospective, randomized study of abatacept (125 mg weekly) and adalimumab (40 mg biweekly) given with methotrexate in 646 adult patients who had rheumatoid arthritis for less than 5 years and with inadequate response to methotrexate.

Disclosures: The study was funded by Bristol-Myers Squibb, which manufactures abatacept. All eight authors have past or present financial relationships, including research grants, consulting fees, and/or stock ownership (two authors) in Bristol-Myers Squibb. Most have financial relationships with Abbott and multiple other pharmaceutical companies.

Review: Interferon therapy for hepatitis C offers little benefit

Article Type
Changed
Display Headline
Review: Interferon therapy for hepatitis C offers little benefit

Using interferon monotherapy to treat hepatitis C in patients who have failed to respond to other treatments did not improve mortality rates and may actually cause harm, according to a Cochrane Collaboration review.

Although interferon does appear to reduce the levels of hepatitis C virus in the blood, this reduced viral load does not translate to increased survival or quality of life.

Dr. Ronald L. Koretz, a gastroenterologist and internal medicine specialist in Granada Hills, Calif., and his associates reported that they could not recommend interferon monotherapy because of the increased risk of all-cause mortality paired with a higher number of adverse events. The report was published online Jan. 30 (Cochrane Database Syst. Rev. 2013 Jan. 30 [doi:10.1002/14651858.CD003617.pub2]).

Interferon is typically used in hepatitis C retreatment when ribavirin or protease inhibitors have not been effective (or are contraindicated or not tolerated). The outcome goal is sustained viral response (SVR), referring to no measurable viral RNA in the blood for 6 months after treatment.

However, SVR had not been validated as a surrogate outcome for clinical improvement in hepatitis C because of the dearth of randomized clinical trials with mortality data.

Dr. Koretz and his colleagues investigated randomized trials in which interferon was compared with a placebo or no treatment at all in chronic hepatitis C patients who had severe fibrosis (grade 3 or 4) and who had not responded to another treatment or had relapsed following interferon treatment. Patients were excluded if they had undergone a liver transplant, had HBV and/or HIV, or had evidence of hepatic decompensation.

Primary outcomes included all-cause and hepatic death, quality of life, and adverse events. Secondary outcomes included liver-related morbidity, SVR, biochemical responses, and histological responses. The researchers identified seven trials with a total of 1,976 patients, but five of these trials (together enrolling 300 patients) were at high risk of bias because of lack of blinding and, in four, possible selection and reporting bias.

Only three trials included outcomes on mortality and hepatic morbidity: HALT-C (Hepatitis C Antiviral Long-Term Treatment Against Cirrhosis) and EPIC 3 (Evaluation of PegIntron in Control of Hepatitis C Cirrhosis), which tracked patients who had severe fibrosis for 3-5 years, and a third trial that was ended before its 48-week endpoint because of the former trials’ results.

When the researchers analyzed only the two larger trials with low bias risk, they found all-cause mortality among the 1,676 patients to be significantly higher in the patients receiving pegylated interferon. The all-cause mortality rate was 9.4% (78/828) among interferon patients, compared with 6.7% (57/848) in patients receiving a placebo or no treatment (RR, 1.41; 95% CI: 1.02-1.96).
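The all-cause mortality estimate can be approximated directly from the reported counts, as in the sketch below; the small differences from the published figures reflect the review’s formal meta-analytic pooling across trials.

```python
import math

def risk_ratio_ci(e1, n1, e2, n2, z=1.96):
    """Crude risk ratio with a log-scale Wald 95% CI:
    e1 deaths among n1 interferon patients vs. e2 among n2 controls."""
    rr = (e1 / n1) / (e2 / n2)
    se_log = math.sqrt(1 / e1 - 1 / n1 + 1 / e2 - 1 / n2)  # SE of ln(RR)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Reported counts: 78/828 deaths with interferon vs. 57/848 without
print(risk_ratio_ci(78, 828, 57, 848))  # -> roughly (1.40, 1.01, 1.95)
```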

The additional deaths among interferon recipients appeared to be unrelated to liver function. Liver-related mortality in the large 5-year trial (low bias risk) showed no significant difference between interferon patients and untreated patients, whether analyzed alone or together with a trial at high bias risk (RR, 1.07; 95% CI: 0.7-1.63). In the one large trial whose 622 patients began without cirrhosis, interferon recipients were not significantly less likely to develop cirrhosis (RR, 0.93; 95% CI: 0.69-1.25).

Interferon recipients did experience less variceal bleeding: 0.5% (4/843) of interferon recipients, compared with 2.1% (18/867) of untreated patients. No significant differences were seen for fibrosis markers or for encephalopathy, ascites, hepatocellular carcinoma, or liver transplantation. Only one small trial reported quality of life data, and it found pain scores among interferon patients to be "significantly higher, P < .001," without providing numbers.

In the two large trials with low bias risk, interferon recipients also experienced more adverse events (RR, 1.18; 95% CI: 0.99-1.41; P = .07) – primarily infections, rash, irritability, fatigue, headaches, muscle pain, flu-like symptoms, and hematologic complications such as neutropenia and thrombocytopenia – although the difference did not reach statistical significance.

Analysis of four trials did show that 3.6% (20/557) of interferon recipients achieved SVR, compared with 0.2% (1/579) of untreated patients (RR, 15.38; 95% CI: 2.93-80.71). Interferon was also linked to reduced inflammation – but not reduced fibrosis – as measured by METAVIR activity scores. Among interferon recipients, 65% (36/55) had improved METAVIR activity scores, compared with 43.5% (20/46) of untreated patients (RR, 1.49; 95% CI: 1.02-2.18).

But these surrogate outcome improvements did not translate to better clinical outcomes. "Two of the commonly employed surrogate markers, sustained viral response and markers of inflammation, failed to be validated since they improved even though the clinical outcomes did not (or may even have become worse)," the researchers wrote.

The review did not receive internal or external funding support. The authors reported no permanent financial contracts with companies producing interferon or other conflicts of interest. Dr. Pilar Barrera Baena receives research funding from Centro de Investigacion Biomedica en Red en Enfermedades Hepaticas y Digestivas (CIBERehd).

Author and Disclosure Information

Publications
Topics
Legacy Keywords
interferon monotherapy, hepatitis C, Cochrane Collaboration review, reduce the levels of hepatitis C virus in the blood, reduced viral load, Dr. Ronald L. Koretz, gastroenterologist, hepatitis C retreatment, ribavirin, protease inhibitors, SVR, HALT-C (Hepatitis C Antiviral Long-Term Treatment Against Cirrhosis
Author and Disclosure Information

Author and Disclosure Information

Using interferon monotherapy to treat hepatitis C in patients who have failed to respond to other treatments did not improve mortality rates and may actually cause harm, according to a Cochrane Collaboration review.

Although interferon does appear to reduce the levels of hepatitis C virus in the blood, this reduced viral load does not translate to increased survival or quality of life.

Dr. Ronald L. Koretz, a gastroenterologist and internal medicine specialist in Granada Hills, Calif., and his associates reported that they could not recommend interferon monotherapy because of the increased risk of all-cause mortality paired with a higher number of adverse events. The report was published online Jan. 30 (Cochrane Database Syst. Rev. 2013 Jan. 30 [doi:10.1002/14651858.CD003617.pub2]).

Interferon is typically used in hepatitis C retreatment when ribavirin or protease inhibitors have not been effective (or are contraindicated or not tolerated). The outcome goal is sustained viral response (SVR), referring to no measurable viral RNA in the blood for 6 months after treatment.

However, using SVR as a surrogate outcome for hepatitis C improvement had not been validated due to the dearth of randomized clinical trials with mortality data.

Dr. Koretz and his colleagues investigated randomized trials in which interferon was compared with a placebo or no treatment at all in chronic hepatitis C patients who had severe fibrosis (grade 3 or 4) and who had not responded to another treatment or had relapsed following interferon treatment. Patients were excluded if they had undergone a liver transplant, had HBV and/or HIV, or had evidence of hepatic decompensation.

Primary outcomes included all-cause and hepatic death, quality of life, and adverse events. Secondary outcomes included liver-related morbidity, SVR, biochemical responses, and histological responses. The researchers identified seven trials with a total of 1,976 patients, but five of these (n = 300) were at high risk of bias due to lack of blinding and, in four, possible selection and reporting bias.

Only three trials included outcomes on mortality and hepatic morbidity: HALT-C (Hepatitis C Antiviral Long-Term Treatment Against Cirrhosis) and EPIC 3 (Evaluation of PegIntron in Control of Hepatitis C Cirrhosis), which tracked patients who had severe fibrosis for 3-5 years, and a third trial that was ended before its 48-week endpoint because of the former trials’ results.

When the researchers analyzed only the two larger trials with low bias risk, they found all-cause mortality among the 1,676 patients to be significantly higher in the patients receiving pegylated interferon. The all-cause mortality rate was 9.4% (78/828) among interferon patients, compared with 6.7% (57/848) in patients receiving a placebo or no treatment (RR, 1.41; 95% CI: 1.02-1.96).

The additional deaths among interferon recipients appeared to be unrelated to liver function. Liver-related mortality in the large 5-year trial (low bias risk) showed no significant difference between interferon patients and untreated patients alone or when analyzed along with a trial at high bias risk (RR, 1.07; 95% CI: 0.7-1.63). In the one large trial whose 622 patients began without cirrhosis, interferon recipients were no less likely to develop cirrhosis (RR, 0.93; 95% CI: 0.69-1.25).

Interferon recipients did experience less variceal bleeding: 0.5% (4/843), compared with 2.1% (18/867) in untreated patients. No significant differences were seen for fibrosis markers or for encephalopathy, ascites, hepatocellular carcinoma, or liver transplantation. Only one small trial reported quality of life data; it found pain scores among interferon patients to be "significantly higher, P < .001," but provided no numbers.

In the two large trials with low bias risk, interferon recipients also experienced more adverse events, a difference of borderline significance (RR, 1.18; 95% CI: 0.99-1.41; P = .07). These were primarily infections, rash, irritability, fatigue, headaches, muscle pain, flu-like symptoms, and hematologic complications such as neutropenia and thrombocytopenia.

Analysis of four trials did show that 3.6% (20/557) of interferon recipients achieved SVR, compared with 0.2% (1/579) of untreated patients (RR, 15.38; 95% CI: 2.93-80.71). Interferon was also linked to reduced inflammation – but not reduced fibrosis – as measured by METAVIR activity scores. Among interferon recipients, 65% (36/55) had improved METAVIR activity scores, compared with 43.5% (20/46) of untreated patients (RR, 1.49; 95% CI: 1.02-2.18).

But these surrogate outcome improvements did not translate to better clinical outcomes. "Two of the commonly employed surrogate markers, sustained viral response and markers of inflammation, failed to be validated since they improved even though the clinical outcomes did not (or may even have become worse)," the researchers wrote.

The review did not receive internal or external funding support. The authors reported no permanent financial contracts with companies producing interferon, and no other conflicts of interest. Dr. Pilar Barrera Baena receives research funding from Centro de Investigacion Biomedica en Red en Enfermedades Hepaticas y Digestivas (CIBERehd).

Article Source

FROM THE COCHRANE DATABASE OF SYSTEMATIC REVIEWS

Vitals

Major Finding: All-cause mortality among hepatitis C patients receiving interferon monotherapy after failing to respond to prior treatment was 9.4% (78/828 patients), compared with 6.7% (57/848) among patients receiving placebo or no treatment, despite higher sustained viral response rates among interferon-treated patients (RR, 15.38; 95% CI: 2.93-80.71) and improved METAVIR inflammation scores (RR, 1.49; 95% CI: 1.02-2.18).

Data Source: An analysis of seven randomized trials with 1,976 total patients, narrowed to the two largest trials at low risk of bias, HALT-C and EPIC 3, which together included 1,676 patients.

Disclosures: The review did not receive internal or external funding support. The authors reported no permanent financial contracts with companies producing interferon, and no other conflicts of interest. Dr. Pilar Barrera Baena receives research funding from Centro de Investigacion Biomedica en Red en Enfermedades Hepaticas y Digestivas (CIBERehd).

Bullying based on weight common for teens

Article Type
Changed
Display Headline
Bullying based on weight common for teens

Adolescents undergoing weight loss treatment are subject to high rates of all forms of bullying from both their peers and adults, according to a study published Dec. 24 in Pediatrics.

Questionnaires from students attending two weight loss treatment camps revealed that 64% of them had experienced bullying related to their weight from peers, friends, parents, teachers, and strangers, and the majority of those bullied (78%) said the bullying had lasted at least a year. Over a third (36%) reported it lasted at least 5 years.

The survey involved 321 adolescents, aged 14-18 years, enrolled in Camp Shane or Wellspring Camps, reported Dr. Rebecca Puhl and her associates at Yale University, New Haven, Conn.

Dr. Puhl’s team offered gift cards as incentives to 1,025 students enrolled at Camp Shane and 400 at Wellspring Academies to complete online self-report surveys e-mailed to all the campers. Of the 550 who started the survey, 321 gave consent and finished it, for a response rate of 27% (Pediatrics 2012 Dec. 24 [doi:10.1542/peds.2012-1106]).

Of the 64% of participants who reported being bullied for their weight, 92% had been bullied by peers, 70% by friends, 42% by P.E. teachers or coaches, 37% by their parents, and 27% by teachers. In addition, 55% reported they had been bullied by an unknown person, possibly a stranger or through cyberbullying. Nearly a third (30%) said they were bullied by peers often or very often.

Although 94% of those bullied said they had been bullied because of their weight (21% said often/very often for weight), other top reasons included their appearance (89%), their friends (74%), their clothes (70%), a dating partner (65%), the way they speak (52%), and their intelligence and/or school performance (50%).

Verbal teasing was the most common form of bullying, including being laughed at (88%), being teased (84%), being called names (83%), and being loudly insulted or the target of nasty looks (75%). Relational, or social, bullying followed, with 74%-82% reporting this form depending on the particular type, from isolation or exclusion to being the object of rumors.

The regression analysis revealed that participants were more likely to report weight-based bullying with increasing body weight, and students with two overweight parents were twice as likely to report it.

The researchers noted that the findings "highlight the need for providers to educate parents about weight-based bullying and to offer them appropriate strategies to address their child’s weight with sensitivity and support." Providers should also help adolescents targeted by bullying to develop coping strategies.

The study was funded by Yale’s Rudd Center for Food Policy and Obesity, and the authors had no disclosures.

Article Source

FROM PEDIATRICS

Vitals

Major Finding: Among adolescents receiving weight loss treatment who reported weight-based bullying, 92% had been bullied by peers, 37% by their parents, and 42% by P.E. teachers or coaches.

Data Source: Online questionnaires from 321 adolescents aged 14-18 years enrolled in weight loss camps.

Disclosures: The study was funded by the Rudd Center for Food Policy and Obesity at Yale University. The authors had no disclosures.