DES Exposure Linked to Lifetime Risk of Adverse Outcomes
In-utero exposure to diethylstilbestrol was associated with a high lifetime risk of a broad spectrum of adverse outcomes in a follow-up study of patients now in their 40s, 50s, and 60s, according to a report in the Oct. 6 issue of the New England Journal of Medicine.
Most of these risks were increased by a factor of more than two, compared with the risks in women of the same age who were not exposed to diethylstilbestrol (DES), said Dr. Robert N. Hoover of the National Cancer Institute, Bethesda, Md., and his associates.
"Although DES has not been prescribed for pregnant women in the United States for 40 years, adverse outcomes continue to occur in women exposed in utero, and continued monitoring ... for established and unexpected adverse outcomes seems prudent," they noted.
In the early 1990s, Dr. Hoover and his colleagues combined three cohort studies of DES-exposed women that had begun in the mid-1970s, so that the pooled subjects could be followed periodically with self-report questionnaires. Their Combined Cohort Study of DES Exposure involved 4,001 DES-exposed women and 1,683 nonexposed control subjects from the original cohorts, who were born between the late 1940s and the early 1960s and whose average age at last follow-up was 48 years.
All cancer diagnoses and biopsy results from cervical or vaginal specimens were verified through review of pathological reports.
Twelve adverse health outcomes that were significantly associated with DES in previous studies were assessed in the combined cohort, and all 12 were found to be significantly associated with DES in this combined analysis.
The hazard ratios (HRs) associated with DES exposure, compared with nonexposure, ranged from a low of 1.42 for preeclampsia to a high of 8.12 for neonatal death (usually related to preterm delivery).
In ascending order, the HRs were 1.64 for spontaneous abortion; 1.82 for breast cancer diagnosed at age 40 or older; 2.28 for cervical intraepithelial neoplasia of grade 2 or higher; 2.35 for early menopause; 2.37 for infertility; 2.45 for stillbirth; 3.72 for ectopic pregnancy; 3.77 for loss of second-trimester pregnancy; and 4.68 for preterm delivery, the investigators wrote (N. Engl. J. Med. 2011;365:1304-14).
In addition, three cases of clear-cell adenocarcinoma of the vagina and one case of clear-cell adenocarcinoma of the cervix were diagnosed in exposed women, compared with no cases in the unexposed women. "The number of clear-cell adenocarcinoma of the vagina or cervix expected on the basis of age-specific rates in the U.S. population was 0.102, for an observed-to-expected ratio of 39," Dr. Hoover and his associates said.
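As a rough arithmetic check of that observed-to-expected ratio (a reader's sketch, not part of the study's methods), dividing the four observed cases by the 0.102 expected cases reproduces the reported figure of about 39:

    # Rough check of the observed-to-expected (O/E) ratio quoted above.
    # The 4 observed cases and the 0.102 expected cases are the figures
    # reported by the investigators; nothing else is assumed.
    observed = 3 + 1   # 3 vaginal + 1 cervical clear-cell adenocarcinomas
    expected = 0.102   # expected count based on age-specific U.S. rates
    print(round(observed / expected, 1))  # prints 39.2, consistent with the reported ratio of 39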
DES-exposed women who had clinical evidence of vaginal epithelial changes at a young age – a marker of high DES dose and exposure early in gestation – were found to have significantly higher risks for adverse outcomes than did exposed women who showed no vaginal epithelial changes. This finding provides additional support for the argument that DES exposure caused, and was not just linked to, the adverse outcomes, they said.
The researchers also calculated the excess risk of adverse outcomes that could be attributed directly to DES exposure. This excess risk was 1.7% for breast cancer, 3.4% for early menopause, 3.5% for CIN, 6.3% for stillbirth, 7.2% for neonatal death, 11.7% for both spontaneous abortion and ectopic pregnancy, 12.7% for preeclampsia, 14.7% for loss of second-trimester pregnancy, 17.8% for infertility, and 35.4% for preterm delivery.
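As context for those figures, here is a minimal sketch of how an excess (attributable) risk of this kind is typically derived: as the difference between the cumulative risk of an outcome in exposed women and the cumulative risk in unexposed women. The inputs below are hypothetical placeholders for illustration, not values reported by Hoover and his associates:

    # Illustrative only: excess (attributable) risk as a simple risk difference.
    # The cumulative risks passed in below are hypothetical, not study data.
    def excess_risk(risk_exposed_pct: float, risk_unexposed_pct: float) -> float:
        """Excess risk attributable to exposure, in percentage points."""
        return risk_exposed_pct - risk_unexposed_pct

    # Example with made-up figures: if 50% of exposed women and 15% of
    # unexposed women experienced a given outcome, the excess risk would
    # be 35 percentage points.
    print(excess_risk(50.0, 15.0))  # prints 35.0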
The Combined Cohort Study of DES Exposure was supported by the National Cancer Institute. Dr. Robboy reports receiving consulting fees from UCB, Belgium. Dr. Karlan reports holding stock in and receiving board membership fees from IRIS International. Dr. Hatch receives royalties as a reviewer of the DES card on the UpToDate medical information site.
FROM NEW ENGLAND JOURNAL OF MEDICINE
Major Finding: Twelve adverse outcomes were significantly more likely to develop in DES-exposed than in nonexposed women by the time they reached middle age, most by a factor of more than two: preeclampsia, spontaneous abortion, breast cancer, cervical intraepithelial neoplasia, early menopause, infertility, stillbirth, ectopic pregnancy, loss of second-trimester pregnancy, preterm delivery, neonatal death, and clear-cell adenocarcinoma of the vagina or cervix.
Data Source: A combined follow-up study of 5,684 women participating in three large cohort studies of DES exposure begun in the mid-1970s.
Disclosures: The Combined Cohort Study of DES Exposure was supported by the National Cancer Institute. Dr. Robboy reports receiving consulting fees from UCB, Belgium. Dr. Karlan reports holding stock in and receiving board membership fees from IRIS International. Dr. Hatch receives royalties as a reviewer of the DES card on the UpToDate medical information site.
Neonatal Herpes: Oral Acyclovir Improves Neurodevelopment
Babies who have neonatal herpes with CNS involvement show improved neurodevelopmental outcomes at age 1 year when they are given 6 months of oral acyclovir to suppress the virus, according to a report in the Oct. 6 New England Journal of Medicine.
In addition, the treatment prevents cutaneous recurrences in babies who have neonatal herpes simplex virus (HSV) involving the skin, eye, or mouth, said Dr. David W. Kimberlin of the University of Alabama at Birmingham and his associates.
These results have been "implied" but not definitively established in previous small, uncontrolled case series. But the current report marks the first time that two parallel, phase III, placebo-controlled clinical trials provided strong evidence to support the use of suppressive acyclovir after neonatal HSV.
The disease is so rare that these two trials, conducted by the National Institute of Allergy and Infectious Diseases’ Collaborative Antiviral Study Group (CASG), required 11 years and 19 participating medical centers to enroll a sufficient number of patients.
In the first trial, CASG 103, 45 infants who had CNS HSV (37 subjects) or disseminated HSV with CNS involvement (8 subjects) were randomly assigned to receive 6 months of oral acyclovir or matching placebo after first undergoing a standard 21-day course of parenteral acyclovir. According to the study protocol, babies who developed a second recurrence of cutaneous lesions during the study period were removed from random assignment and given open-label acyclovir suppression.
Thirty-nine subjects (87%) either completed 6 months of treatment or had two skin recurrences and were placed on open-label treatment. The other six infants were lost to follow-up, withdrew consent, or were dropped from the study for nonadherence; one infant in the placebo group died.
Twenty-eight of the original 45 infants (62%) underwent assessment using the Bayley Scales of Infant Development at age 1 year. Those who had received suppressive acyclovir showed a significantly higher mean mental score (88.2) than did babies who had received placebo (68.1).
Moreover, 69% of the babies in the acyclovir group were classified as having normal neurologic outcomes, compared with only 33% of those in the placebo group. In the acyclovir group, 6% had mild neurologic impairment, 6% had moderate impairment, and 19% had severe impairment; the corresponding proportions in the placebo group were 8%, 25%, and 33%.
In addition, within the group randomly assigned to receive acyclovir, 15 infants received the full 6-month course of therapy, 6 received only part of the 6-month course, and 7 did not receive any active drug. Bayley mental development scores rose incrementally with increasing time on acyclovir, so that babies who took the full 6 months had a mean score of 85, those who took less than the full 6 months had a mean score of 80, and those who took no acyclovir had a mean score of 73, the investigators said (N. Engl. J. Med. 2011;365:1284-92). Bayley motor development scores did not differ significantly between the acyclovir and placebo groups.
The finding that acyclovir suppression of HSV improves neurodevelopment "should be tempered by the fact that ... assessments were not performed in 38% of subjects in the CASG 103 study. This substantial attrition renders the primary protocol end point less interpretable," they noted.
"Ongoing neurologic injury occurs in infants who survive neonatal HSV disease and ... it can be decreased by longer-term antiviral suppression."
Nevertheless, this study "provides the first controlled data that suggest that ongoing neurologic injury occurs in infants who survive neonatal HSV disease and that it can be decreased by longer-term antiviral suppression," Dr. Kimberlin and his colleagues said.
In the second trial, CASG 104, 29 babies with neonatal skin, eye, or mouth HSV were randomly assigned to receive 6 months of oral acyclovir or matching placebo after first undergoing a standard 14-day course of parenteral acyclovir. Twenty-six (90%) either completed 6 months of therapy or experienced two cutaneous recurrences and switched to open-label acyclovir. The remaining three were lost to follow-up or were dropped from the study because of nonadherence.
None of the babies in this trial developed CNS involvement during cutaneous recurrences. There were no significant differences in Bayley mental development scores between babies who received acyclovir and those who received placebo.
However, as expected, the active treatment did prevent cutaneous recurrences, compared with placebo. The positive socioeconomic effect of decreased recurrences should not be underestimated, the investigators noted.
Turning to adverse effects of active therapy, the researchers found no differences between acyclovir and placebo groups, and there were no adverse events that led to discontinuation of the study drug.
Previous studies have suggested that acyclovir may be associated with neutropenia. There were no significant differences in absolute neutrophil counts between acyclovir and placebo groups in either CASG 103 or CASG 104, although there was a nonsignificant trend toward neutropenia with acyclovir. Absolute neutrophil counts of 500 cells/mcL or less developed in 25% of the subjects receiving acyclovir in CASG 103 and in 20% of those receiving acyclovir in CASG 104, compared with only 5% and 7%, respectively, of babies receiving placebo.
However, "it is possible that there is indeed an association that our studies were underpowered to detect; thus, we believe that neutropenia should continue to be considered as a possible toxic effect of longer-term oral acyclovir therapy," Dr. Kimberlin and his associates said.
These studies were supported by the National Institute of Allergy and Infectious Diseases. Oral acyclovir and matching placebo were provided by GlaxoSmithKline, Alpharma USPD, and Pharm Ops. Dr. Kimberlin’s associates reported ties to numerous drug companies.
FROM NEW ENGLAND JOURNAL OF MEDICINE
Major Finding: Babies with neonatal herpes who had received 6 months of oral acyclovir to suppress CNS HSV had a significantly higher mean score (88.2) on the Bayley mental development assessment at age 1 year than did those given placebo (68.1).
Data Source: Two parallel, multicenter, phase III, randomized, double-blind clinical trials including 45 patients with neonatal HSV involving the CNS and 29 with HSV involving the skin, eye, or mouth who were followed for 1 year.
Disclosures: These studies were supported by the National Institute of Allergy and Infectious Diseases. Oral acyclovir and matching placebo were provided by GlaxoSmithKline, Alpharma USPD, and Pharm Ops. Dr. Kimberlin’s associates reported ties to numerous drug companies.
Advance Directives Limit Costs Only in Certain Regions
Advance directives specifying that patients don’t want aggressive treatment at the end of life limit interventions and their costs only in regions in which aggressive end-of-life care is common, according to a report in the Oct. 5 issue of JAMA.
In contrast, in areas of the United States where end-of-life treatment is not aggressive, such treatment-limiting documents have little effect on the use of these interventions or on health care costs, said Lauren Hersch Nicholas, Ph.D., of the University of Michigan, Ann Arbor, and her associates.
These results suggest that "the clinical effect of advance directives is critically dependent on the context in which a patient receives care," they noted.
Dr. Nicholas and her colleagues assessed end-of-life care for 3,302 Medicare beneficiaries who died between 1998 and 2007, at a mean age of 83 years. They calculated health care spending during the last 6 months of life across all care settings, including inpatient, outpatient, hospice, home health, and skilled nursing settings. The researchers selected patients who had lived in specific geographic regions across the country that were characterized by low, medium, or high end-of-life health care expenditures.
Overall, 70% of the beneficiaries were hospitalized at least once during their final 6 months of life, 41% died in a hospital, and 61% had an advance directive (a living will or durable power of attorney).
"The clinical effect of advance directives is critically dependent on the context in which a patient receives care."
For the study population as a whole, health care spending did not vary according to whether or not a patient had an advance directive.
When the data were broken down by the usual type of end-of-life care (and costs) in each region, advance directives were associated with less-aggressive care (and lower costs) only in regions where more-aggressive care (and higher costs) were the norm.
"When patients in high-spending areas had advance directives limiting treatment, they averaged significantly lower end-of-life Medicare spending, were less likely to have an in-hospital death, and had significantly greater odds of hospice use than [did] decedents without advance directives in these regions," Dr. Nicholas and her associates wrote (JAMA 2011;306:1447-53).
In contrast, advance directives had no effect on end-of-life care or on end-of-life expenses among patients in medium- or low-spending regions.
"One interpretation of these data is that advance directives are most effective when one prefers treatment that is different from local norms. Thus, in high-intensity regions, more-limited treatment requires an explicit statement," the investigators noted.
"We urgently need studies to examine the extent to which greater advance directive use in high-intensity regions would result in treatment that is more concordant with patient preferences and to understand the patient, physician, and health system characteristics that lead to higher rates of use," they added.
This study was supported by the National Institutes of Health and the Michigan Institute for Clinical and Health Research. The authors reported that they had no relevant financial conflicts of interest.
The findings by Nicholas et al. "may reveal as much about the power of local norms to shape care as about the power of advance directives to overcome them," said Dr. Douglas B. White and Dr. Robert M. Arnold.
In this study, approximately one-third of patients did not have an advance directive, and those who did were no more likely to avoid unwanted end-of-life interventions than were those who did not. "A more potent approach to improving end-of-life care may come through innovative strategies to change medical norms," they said.
Douglas B. White, M.D., is in the department of critical care medicine and the Program on Ethics and Decision Making in Critical Illness at the University of Pittsburgh. Robert M. Arnold, M.D., is in the university’s section on palliative care and medical ethics.
FROM JAMA
Major Finding: Advance directives reduced end-of-life care and its associated costs only among patients who lived in regions characterized by aggressive end-of-life care and high end-of-life care costs.
Data Source: Analysis of survey data for 3,302 Medicare beneficiaries who died between 1998 and 2007, at a mean age of 83 years.
Disclosures: This study was supported by the National Institutes of Health and the Michigan Institute for Clinical and Health Research. The authors reported no relevant financial conflicts of interest.
Asthma Care Measures Don't Reflect Outcomes in Children
Hospital compliance with the Children’s Asthma Care set of process measures did not correlate with asthma patients’ clinical outcomes in a study of more than 37,000 asthma patients who were admitted to 30 U.S. children’s hospitals, according to a study reported in the Oct. 5 issue of JAMA.
Because compliance with these process measures was not associated with improved outcomes, it "cannot serve as a means to evaluate and compare the quality of care provided for patients admitted with asthma exacerbations," said Dr. Rustin B. Morse of Phoenix Children’s Hospital and the University of Arizona, Phoenix, and his associates.
The Joint Commission considers the Children’s Asthma Care (CAC) measure set to be an "accountability measure," appropriate for use in determining accreditation, in public reporting of hospital performance, and in pay-for-performance efforts. But the findings of this study instead suggest that the CAC measure set does not meet the Joint Commission criteria for accountability measures and should be "reconsidered," Dr. Morse and his colleagues said.
They assessed time trends in compliance with the CAC measure set using data on a random sample of 37,267 pediatric inpatients with 45,499 admissions for asthma exacerbations during a 33-month period at 30 freestanding children’s hospitals across the country.
The CAC measure set includes three measures: whether patients received asthma relievers on admission (CAC-1), whether they received systemic corticosteroids on admission (CAC-2), and whether they were discharged with a complete home management plan of care (CAC-3). Compliance is measured quarterly by a review of the medical records of a random sample of patients.
Compliance with CAC-1 and CAC-2 was quite high, exceeding 95% in all but 1 of the 11 quarters assessed, and was consistent across hospitals. Because there were so few cases of poor compliance, no analysis could be performed to examine whether better compliance correlated with improved clinical outcomes.
In contrast, compliance with CAC-3 was not as high and varied substantially among hospitals. Mean CAC-3 compliance was only 41% during the first three quarters of the study and improved to 73% in the final three quarters.
This allowed an analysis of the relationship between compliance with CAC-3 and clinical outcomes. But no significant association was found between CAC-3 compliance and improved outcomes at 7 days, 30 days, or 90 days after discharge, the investigators said (JAMA 2011;306:1454-60).
There also was no association between compliance and clinical outcomes when hospitals in the highest-performing quartile were compared with those in the lowest-performing quartile.
One of Dr. Morse’s associates reported ties to the Robert Wood Johnson Foundation, the National Institute of Allergy and Infectious Diseases, the Child Health Corporation of America, and the Pediatric Research in Inpatient Settings Network. Two reported grants from the Agency for Healthcare Research and Quality.
The findings of Dr. Morse and coauthors demonstrate that the "use of a written discharge management plan no longer meets the criteria for a high-quality measure," Dr. Charles J. Homer said.
The authors showed that the Joint Commission’s CAC-3 measure (a written plan for managing asthma given to the patient at discharge) "should be retired" as a measure of hospital performance.
They also showed that compliance with CAC measures 1 and 2 is nearly universal within a subset of freestanding children’s hospitals. However, more than two-thirds of hospitalizations occur at other types of facilities, and the performance of these measures is yet to be documented in other settings.
Dr. Homer is with the National Initiative for Children’s Healthcare Quality and the department of pediatrics at Harvard Medical School and Children’s Hospital Boston. He reported no financial conflicts of interest. These remarks were adapted from his editorial accompanying Dr. Morse’s report (JAMA 2011;306:1487-8).
FROM JAMA
Major Finding: Compliance with two of the three CAC process measures was so high that no analysis could be performed to assess whether it correlated with patient outcomes, and compliance with the third measure did not correlate with patient outcomes.
Data Source: A cross-sectional study assessing 30 U.S. children’s hospitals’ compliance with the CAC measure set in a sample of 37,267 pediatric asthma patients seen during a 33-month period.
Disclosures: One of Dr. Morse’s associates reported ties to the Robert Wood Johnson Foundation, the National Institute of Allergy and Infectious Diseases, the Child Health Corporation of America, and the Pediatric Research in Inpatient Settings Network; two reported grants from the Agency for Healthcare Research and Quality.
Diet Linked to Risk of Fetal NTDs
Women with high-quality diets during the year before pregnancy were at lower risk than were those with poor diets for delivering a baby with orofacial clefts or neural tube defects, according to a study published online Oct. 3 in Archives of Pediatrics and Adolescent Medicine.
This finding, from an analysis of data in the ongoing National Birth Defects Prevention Study (NBDPS), is "notable" because previous analyses of the same data, "which assessed single-nutrient intakes in isolation, had not been informative.
"In particular, maternal intake of folic acid–containing vitamin/mineral supplements was not associated in the NBDPS with a reduced risk of neural tube defects, and findings for dietary folate were inconsistent" in these previous analyses, said Suzan L. Carmichael, Ph.D., of Stanford (Calif.) University and her associates.
"[Our] findings suggest that overall diet quality is more predictive of birth defect risk than intake of single nutrients," they noted.
Dr. Carmichael and her colleagues developed two indexes of dietary quality, one modeled after the Mediterranean Diet Score and the second after the Diet Quality Index for Pregnancy. They then assessed how each of these indexes performed in predicting risk for isolated (nonsyndromic) neural tube defects and orofacial clefts using data on 9,558 pregnancies in the NBDPS.
The NBDPS is an ongoing multistate, population-based case-control study of well-defined birth defects. For this analysis, the researchers assessed 3,411 pregnancies involving isolated neural tube defects (936) or orofacial clefts (2,475), and 6,147 pregnancies that served as controls. All the deliveries occurred between 1997 and 2005.
For both indexes of dietary quality, "we observed reduced birth defects risks associated with higher dietary quality scores. That is, after adjusting for all covariates, increasing diet quality based on either index was associated with reduced risk of each birth defect studied," the researchers said (Arch. Pediatr. Adolesc. Med. 2011 [doi:10.1001/archpediatrics.2011.185]).
"The strongest associations were observed for anencephaly," they added.
The findings were similar in further analyses of important subgroups of patients, including an assessment restricted to women who took vitamin/mineral supplements.
These results likely are generalizable to other women "because of [our] study’s population-based design, active case ascertainment, and the racial/ethnic, geographic, and socioeconomic diversity" of the subjects.
"Although the focus on folic acid has enabled substantial reductions in the prevalence of neural tube defects and perhaps other birth defects, the population burden of birth defects remains extensive. If increased dietary quality can indeed have a greater impact than individual nutrients, appropriate public health messages may need to be developed that convey this broader perspective," Dr. Carmichael and her associates said.
In an editorial accompanying this report, David R. Jacobs Jr., Ph.D., and his associates said, "The lesson from the article by Carmichael et al. is an important one: People, including women of childbearing age, should eat good food."
"A nutrient [supplement] may correct a deficiency condition but not necessarily be of benefit at higher doses in well-nourished people," they added (Arch. Pediatr. Adolesc. Med. 2011 [doi:10.1001/archpediatrics.2011.184]).
"Reduction of neural tube defects may be achievable by diet alone, at the same time reducing potential risk for other chronic diseases in the rest of the population," said Dr. Jacobs and his associates in the division of epidemiology and community health at the University of Minnesota School of Public Health, Minneapolis.
This study was supported in part by the National Institutes of Health and the Centers for Disease Control and Prevention. Dr. Carmichael and her associates and Dr. Jacobs and his associates reported no relevant financial disclosures.
FROM ARCHIVES OF PEDIATRICS AND ADOLESCENT MEDICINE
Major Finding: For two separate indexes of dietary quality, better diets during the year before pregnancy were associated with a reduced risk of neural tube defects and of orofacial clefts.
Data Source: An analysis of data from the multistate, population-based, case-control National Birth Defects Prevention Study, which involved 936 neonates with neural tube defects, 2,475 with orofacial clefts, and 6,147 neonates without malformations.
Disclosures: This study was supported in part by the National Institutes of Health and the Centers for Disease Control and Prevention. Dr. Carmichael and her associates reported no relevant financial disclosures.
Gout Patients' Poor Footwear May Exacerbate Pain
Patients with gout tend to wear shoes that lack cushioning, provide minimal stability and motion control, limit gait efficiency, fit poorly, and show excessive wear, according to a report in the October issue of Arthritis Care & Research.
Since all of these problems can exacerbate patients’ pain and impairment, "We suggest that footwear should be considered in the management plan of patients with gout," said Keith Rome, Ph.D., professor of podiatry at Auckland (New Zealand) University of Technology, and his associates.
The investigators performed a cross-sectional observational study to assess the characteristics of gout patients’ footwear. They enrolled 50 patients, predominantly middle-aged men who had longstanding disease that had been diagnosed by a physician according to American College of Rheumatology criteria.
Most of the study subjects had flat feet, and many were obese and had cardiovascular conditions. Seven had diabetes. Foot pain, foot-related functional limitation, and foot-related adverse impact on activities of daily living were assessed using the Foot Function Index and the Leeds Foot Impact Scale, both of which are brief self-administered questionnaires.
A podiatrist assessed the study subjects’ footwear during a typical visit to their rheumatology outpatient clinics. Subjects had not received any instructions regarding footwear before the assessment. All the subjects were seen during summer months in an urban setting.
In all, 21 patients (42%) were wearing shoes categorized as "poor" because they lacked support and sound structure, including sandals, flip-flops, slippers, mules, or moccasins. Even though such open footwear is common during the summer in New Zealand, it is likely that some patients chose it because they had difficulty finding closed shoes that fit properly and were comfortable, suggested Dr. Rome and his colleagues (Arthritis Care Res. 2011 Oct. 3 [doi:10.1002/acr.20582]).
One patient was wearing "average" footwear such as hard-soled or rubber-soled shoes. The remaining 28 patients (56%) were wearing "good" footwear such as walking shoes, athletic shoes, therapeutic shoes, or Oxford-type shoes.
However, even patients who had "good" footwear frequently had an improper fit. Many shoes were too long or too short for the patient and many were either too wide or too narrow, though most shoes had adequate depth.
More than 60% of shoes (30 pairs) had no cushioning at all, and another 36% (18 pairs) had only heel or forefoot cushioning. Only 13 pairs of shoes (26%) had adequate heel counter stiffness, 25 (50%) had adequate midfoot sole sagittal stability, and 21 (42%) had midfoot sole frontal stability. Twelve pairs of shoes (23%) had no fixation whatsoever.
More than half of the footwear had a flexion point distal to the level of the first metatarsophalangeal joint (MPJ). "This may limit gait efficiency due to altered kinematics, which result from inhibition of normal first MPJ function. We can postulate that a flexion point proximal may jeopardize the shoe’s stability and may exacerbate the problem of efficient toe-off observed in patients with chronic gout," the researchers noted.
In addition, most of the patients wore shoes that were more than 1 year old and showed excessive wear.
Patients who wore "poor" footwear reported higher levels of foot-related impairment and greater limitation of activities. Similarly, patients who wore "good" footwear but had improper fit reported higher levels of impairment and greater limitation of activities.
In all, 60% of patients said that the cost of footwear contributed to their choice of shoes. So "the wearing of poor shoes may be due to financial restrictions," Dr. Rome and his associates said.
"Future research should be focused on assessing the role of competitively priced footwear with adequate cushioning, motion control, and sufficient width at the forefoot," they added.
No financial conflicts of interest were reported.
FROM ARTHRITIS CARE & RESEARCH
Major Finding: Forty-two percent of gout patients’ shoes lacked support and sound structure, 96% had inadequate cushioning, 74% had inadequate heel stiffness, 50% had inadequate midfoot sole sagittal stability, 58% had inadequate midfoot sole frontal stability, and most showed excessive wear.
Data Source: A cross-sectional observational study of 50 gout patients seen at rheumatology outpatient clinics in Auckland.
Disclosures: No financial conflicts of interest were reported.
Deferasirox Can Improve Liver Fibrosis, Necroinflammation
Treatment with the iron chelator deferasirox for at least 3 years stabilized or improved liver fibrosis and necroinflammation and also reduced serum alanine aminotransferase levels in patients with beta-thalassemia and iron overload, Dr. Yves Deugnier and his colleagues reported in the October issue of Gastroenterology.
This improvement occurred independently of patients’ treatment response as measured by liver iron concentration, which suggests that some of the drug’s benefit is independent of its iron-clearing ability. The improvements also were seen regardless of patients’ hepatitis C virus (HCV) antibody status at baseline, said Dr. Deugnier of University Hospital Pontchaillou in Rennes, France, and his associates (Gastroenterology 2011 October [doi:10.1053/j.gastro.2011.06.065]).
"To our knowledge, this is the first analysis which demonstrates regression of fibrosis in patients with beta-thalassemia during iron chelation therapy," they noted. The study was sponsored by Novartis, maker of deferasirox.
"The overall improvement in liver fibrosis and necroinflammation with deferasirox treatment seen in this study is striking," given that there is scant evidence in the literature that any medication can reverse fibrosis.
The investigators performed a secondary analysis of data on 219 subjects participating in two 1-year trials. One trial involved patients with beta-thalassemia major who had received 1 year of either deferasirox or deferoxamine therapy, and the other involved patients with various transfusion-dependent anemias, including beta-thalassemia, who had received 1 year of treatment with deferasirox.
In Dr. Deugnier’s study, the subjects with beta-thalassemia were followed as they continued on deferasirox after the trials concluded, because the data collected so far showed that a 1-year course of therapy may not be long enough to reveal changes in the extent or severity of fibrosis.
All the study subjects were at least 2 years of age at baseline and were receiving eight or more blood transfusions per year. All underwent liver biopsy at baseline and after 3 years of deferasirox treatment, and 210 had evaluable biopsy samples.
A total of 134 patients were classified as response successes on the basis of their liver iron concentrations, while the other 76 were deemed to be response failures by this measure.
However, fibrosis staging scores improved in both groups – the response successes and failures. Overall, 122 patients, 56% of the entire study population, showed stabilization of liver fibrosis and another 59 (27%) showed regression of fibrosis.
Fibrosis stabilized in 60% of response successes and 49% of response failures, while it regressed in 26% of response successes and 30% of response failures.
This lack of correlation between liver iron concentrations and improvements in fibrosis suggests that deferasirox’s effect on fibrosis may be independent of its ability to remove iron. It is possible that the drug exerts a direct effect on the pathophysiologic mechanisms that moderate fibrosis, the researchers said.
"Recent studies have shown an inhibitory effect of deferasirox on the transcriptional nuclear factor NF-KB35, a protein which also has been shown to be implicated in the development of fibrosis of the lung. Further molecular biology studies are required to explore this hypothesis," they noted.
Deferasirox also improved liver fibrosis regardless of subjects’ HCV status. The fibrosis stabilized in 47% of HCV-positive and 57% of HCV-negative patients, and it regressed in 30% of HCV-positive and 27% of HCV-negative patients. Thus, infection with this virus does not appear to diminish the drug’s effectiveness.
Ishak necroinflammatory grading scores improved by a mean of 1.3 points in the study population overall. As with fibrosis, inflammation did not correlate with liver iron concentrations. And as with fibrosis, necroinflammatory scores improved in both HCV-positive and HCV-negative patients.
Mean serum alanine aminotransferase (ALT), a marker of hepatocellular damage, decreased from 40.9 IU/L to 29.6 IU/L in patients who took deferasirox. Improvements in ALT correlated with decreases in liver iron concentration, and only patients classified as response successes by that measure showed significant reductions in ALT.
The study findings "are encouraging and warrant further studies to investigate the potential effects of deferasirox in preventing iron-induced tissue fibrosis in organs other than the liver, such as endocrine organs or the heart. In support of this concept, recent preclinical data in which deferasirox treatment was administered to an iron-overloaded gerbil model were associated with attenuated cardiac fibrosis," Dr. Deugnier and his colleagues wrote.
Most of the authors disclosed relationships with Novartis; several authors are employees of the company and others receive honoraria, lecture fees, or grants from Novartis.
FROM GASTROENTEROLOGY
Mortality Low With Conservative Management of Necrotizing Pancreatitis
Approximately two-thirds of patients with necrotizing pancreatitis can be managed conservatively, and the mortality rate will remain relatively low, Dr. Hjalmar C. van Santvoort and his colleagues reported in Gastroenterology (2011;141:1254-63).
Even in patients who develop infected necrosis, delaying intervention as long as possible and using an approach that begins with simple percutaneous catheter drainage before attempting more invasive procedures will allow resolution in approximately 30% of patients, said Dr. van Santvoort of the department of surgery at University Medical Center Utrecht (the Netherlands) and his associates in the Dutch Pancreatitis Study Group.
Treatment of necrotizing pancreatitis has changed considerably in recent years, but studies of patient outcomes have not kept pace with those changes. Most of the recent studies have been small and retrospective, and most of their data comes from highly experienced, single centers. "It is questionable whether these results can be extrapolated to daily practice in nonexpert centers," the investigators said.
So they performed a prospective 4-year study in a nationwide Dutch cohort of 639 patients "covering the entire spectrum of necrotizing pancreatitis in recent years."
All patients received immediate rigorous fluid resuscitation and underwent full laboratory assessments within the first 3 days of hospitalization. Nasojejunal enteral feeding was initiated for those who couldn’t tolerate an oral diet, and parenteral nutrition was only initiated if the enteral route was not tolerated or provided insufficient nutrition.
Antibiotics were given only if infection was suspected or documented. And patients who appeared to be developing organ failure were treated in an ICU.
Further intervention was only undertaken if infection of pancreatic or peripancreatic necrosis was suspected or verified. "Whenever possible, intervention was postponed until approximately 4 weeks after the onset of the disease."
The preferred first intervention was percutaneous catheter drainage or endoscopic transluminal catheter drainage. If that was unsuccessful, the next options were minimally invasive video-assisted retroperitoneal debridement or endoscopic transluminal necrosectomy, both of which were considered safe and feasible.
Open necrosectomy by laparotomy with continuous postoperative lavage was considered as a last resort, for extreme clinical deterioration thought to be caused by abdominal compartment syndrome, bowel ischemia, or perforation of a visceral organ.
The primary outcome of the study was mortality during hospitalization.
Approximately two-thirds of patients (62%) were treated with a conservative approach, without any radiologic, endoscopic, or surgical intervention. Mortality in this group was only 7%. Moreover, this group included 11 patients with infected necrosis who received only IV antibiotics "because of their extraordinary good clinical condition in the absence of sepsis or organ failure." Their mortality was 0%.
In contrast, mortality was 27% in the remaining one-third of patients, who required an invasive procedure as their first intervention. However, the longer the interval between admission and the first invasive intervention, the lower the mortality: it was 56% in patients who underwent intervention at 0-14 days, 26% if the intervention was delayed until 14-29 days, and 15% if it was delayed until 29 days or more.
This linear association remained robust when the data were adjusted to account for baseline prognostic factors such as patient age, APACHE score, the severity of CT findings, and the presence or absence of organ failure.
Among the 33% of patients who required a radiologic, endoscopic, or surgical intervention for infected necrosis (performed at a median of 28 days), a longer interval before intervention likewise was associated with a lower risk of complications such as new-onset organ failure, intra-abdominal bleeding, and enterocutaneous fistula or perforation of a visceral organ. The risk of complications was 72% if the intervention was done at 0-14 days, 57% if it was delayed 14-29 days, and 39% if it was delayed 29 days or more.
In all, 5% of patients required emergency laparotomy, usually within 5 days of admission, and mortality was 78% in these patients. Deaths occurred almost exclusively in patients who had organ failure (mortality, 35%), compared with those who did not have organ failure (2%).
"We confirmed that approximately half of the patients with necrotizing pancreatitis who die have sterile necrosis. Mortality in these patients is almost exclusively caused by multiple organ failure in the first week. There currently is no effective treatment to improve outcome in these patients," Dr. van Santvoort and his associates said.
"This supports the theory that organ failure early in the course of acute pancreatitis, which is associated with systemic release of cytokines and a systemic inflammatory response syndrome, is a different clinical entity than organ failure as a result of sepsis from infected necrosis at a later stage," they noted.
FROM GASTROENTEROLOGY
Mallory-Denk Bodies on Biopsy Predict Fibrosis Progression in HCV
In patients who have chronic hepatitis C and undergo liver biopsy, the presence of Mallory-Denk bodies – hepatocyte cytoplasmic inclusions that occur in several chronic liver diseases – is independently associated with progression of liver fibrosis, Dr. Mina O. Rakoski and her colleagues reported in the October issue of Clinical Gastroenterology and Hepatology.
In addition, patients in whom serial liver biopsies show an increase in the number of Mallory-Denk bodies over time are more likely to have clinical decompensation and progression to cirrhosis than patients who have no Mallory-Denk bodies or who have a stable or decreasing number of them over time.
Little is known about Mallory-Denk bodies, and it is still unclear whether they "represent a benign epiphenomenon of hepatocyte injury" or are actual modifiers of disease progression. Their major constituents are keratin polypeptides 8 and 18, which "likely play an essential cytoprotective role in the liver," said Dr. Rakoski of the University of Michigan, Ann Arbor, and her associates.
In mice, gender and genetic background play critical roles in the formation of Mallory-Denk bodies. In humans, genes that encode keratin, including KRT8 and KRT18, have been associated with susceptibility to end-stage liver disease, increased fibrosis in chronic hepatitis C, and increased severity of primary biliary cirrhosis, they noted.
To explore the potential prognostic value of Mallory-Denk bodies in biopsy samples, Dr. Rakoski and her colleagues analyzed data from the HALT-C (Hepatitis C Antiviral Long-Term Treatment Against Cirrhosis) trial. This was a multicenter prospective randomized controlled trial involving 1,050 patients with chronic hepatitis C and advanced fibrosis or cirrhosis.
The HALT-C study subjects underwent liver biopsy at baseline, 18 months, and 3 years, and were followed for a median of 6 years to track their clinical and histologic outcomes. The presence or absence of Mallory-Denk bodies was recorded by expert pathologists reviewing the biopsy samples.
A total of 158 subjects (15%) had Mallory-Denk bodies present in baseline biopsy samples. Their presence was associated with laboratory markers of severe disease, including low platelets, low albumin, high AFP, and high AST/ALT ratio. It also was associated with histologic markers of severe disease, including greater periportal fibrosis, greater pericellular fibrosis, steatosis, and higher inflammation scores.
A subset of 844 HALT-C patients was studied longitudinally, and 719 of these patients showed no Mallory-Denk bodies on baseline biopsy. In all, 61 of these subjects (8.5%) did show Mallory-Denk bodies on repeat biopsy ("MDB gain").
MDB gain was significantly associated with increased fibrosis and steatosis on repeat biopsy, as well as with diabetes, female gender, and Hispanic ethnicity. The association with gender and ethnicity suggests that genetic factors play an important but as yet unknown role in MDB formation.
Of 125 patients who had Mallory-Denk bodies on baseline biopsy, 101 (81%) showed fewer inclusions on repeat biopsy ("MDB loss"). This loss was associated with a lower BMI, less baseline fibrosis, the absence of diabetes, and the absence of smoking.
It is not known why some patients showed MDB loss over time, or why these patients did not show improved outcomes. It is possible that the loss of MDB actually reflected liver sampling errors. It also is possible that some unknown environmental or genetic factor caused the resolution of the inclusions but did not affect overall outcomes, Dr. Rakoski and her colleagues said.
In a subset of 58 patients with MDB gain over time, half developed an adverse clinical outcome. In contrast, only 15% of subjects who did not have MDB developed an adverse clinical outcome, a significant difference.
By comparison, in a subset of 88 patients with MDB loss over time, 23% developed an adverse clinical outcome, whereas 32% of those who did not have MDB loss developed an adverse clinical outcome – a nonsignificant difference. Thus, MDB loss was not associated with either good or adverse clinical outcomes.
Histologic outcomes also were assessed in a subset of 447 patients in the longitudinal analysis. In all, 67% of those who showed MDB gain over time developed an adverse histologic outcome, compared with only 28% of patients who did not show MDB gain over time. This difference was highly significant.
This study was supported by the National Institute of Diabetes and Digestive and Kidney Diseases, the National Institute of Allergy and Infectious Diseases, the National Cancer Institute, the National Center for Minority Health and Health Disparities, the National Center for Research Resources, and Hoffmann-La Roche (now Genentech). The investigators reported no conflicts of interest.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Mortality Risk Doubles During Year After Hip Fracture
Mortality risk doubles during the year after hip fracture among women aged 65 years and older, then returns to baseline in many women, but this pattern does not apply in all cases, according to a large, prospective cohort study.
Mortality risk in patients who have sustained hip fracture differs by age, underlying health, and the interval since the injury occurred, said Dr. Erin S. LeBlanc of the Center for Health Research at Kaiser Permanente Northwest Region, Portland, Ore., and her associates.
Previous studies of this issue have had methodological limitations and have yielded inconsistent results. Most have shown increased short-term mortality, but have had mixed findings on long-term mortality. “Our data suggest that previous mixed results…may have been the result of differences in the underlying age and health status of the population being studied,” the researchers said (Arch. Intern. Med. 2011 Sept. 26 [doi:10.1001/archinternmed.2011.447]).
They used data from the SOF (Study of Osteoporotic Fractures) to address these methodological limitations. The subjects were identified before hip fractures occurred, and extensive data on comorbidities allowed adjustment for potential confounders.
The SOF subjects were 5,580 community-dwelling women aged 65 and older who resided in Maryland, Minnesota, Oregon, and Pennsylvania at baseline in 1986–1988. This population included 1,116 women who sustained incident hip fractures during a mean follow-up of 14 years, and 4,464 age-matched control subjects without hip fracture.
Mortality risk was highest in the first year after hip fracture. The rate was 16.9% among cases, versus only 8.4% in controls. This doubling of risk persisted after adjustment to account for factors such as total hip bone mineral density.
Moreover, deaths in the control group were evenly spread throughout the year, whereas those in the case group were concentrated within the first 6 months, with more than half the deaths occurring in the first 3 months.
When the subjects were categorized by age (younger than 70 years, 70–79 years, or 80 years and older), the youngest group showed a fivefold rise in mortality risk during the first year after hip fracture (16.3%), compared with women younger than 70 who did not sustain a hip fracture (3.7%). In contrast, the oldest women showed no increased mortality risk in the year following hip fracture, and the middle group showed an intermediate risk.
In addition, mortality risk remained elevated for years 1–10 in the youngest group, although it was somewhat lower than the mortality risk in the first year. Mortality risk declined to baseline for the next 10 years in the two older age groups.
“We hypothesize that age influences the risk of death after hip fracture by affecting the baseline death rate in the population. Those who are younger … have a low risk of dying from other causes. Therefore, experiencing a hip fracture may increase their mortality risk compared with nonfracture controls.
“In contrast, octogenarians generally have a relatively high risk of dying from other causes; therefore, experiencing a hip fracture does not result in an increased risk of death during the next year compared with other women their age,” the researchers said.
This study was supported by the U.S. Public Health Service, the National Institute of Arthritis and Musculoskeletal and Skin Diseases, the National Institute on Aging, and the National Center for Research Resources. No financial conflicts of interest were reported.
FROM ARCHIVES OF INTERNAL MEDICINE