Insulin Resistance May Flag Ischemic Stroke Risk

Insulin resistance appears to be associated with a nearly threefold increased risk for ischemic stroke, independently of established cardiovascular risk factors such as diabetes, obesity, and the metabolic syndrome, according to a prospective cohort study.

If this conclusion is confirmed in further studies, “insulin resistance may [become] a novel therapeutic target for stroke prevention,” said Dr. Tatjana Rundek of the neurology department at the University of Miami and her associates.

The investigators used data from the Northern Manhattan Study, a prospective, population-based cohort study of stroke, to examine the issue. The study population comprised 1,509 older adults residing in a multiethnic urban community who were enrolled between 1993 and 2001 and followed for a mean of 8.5 years.

The study subjects had no stroke, MI, or diabetes at baseline. The mean age was 68 years. About 60% were Hispanic, 20% were black, and 20% were white. In all, 23% of the men and 26% of the women were estimated to have insulin resistance, as measured indirectly by the homeostasis model assessment (HOMA).
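HOMA estimates insulin resistance from fasting blood measurements alone. The article does not give the study's exact formula or quartile cutoffs, but the conventional HOMA-IR calculation can be sketched as follows (the function name and example values are illustrative, not taken from the study):

```python
def homa_ir(fasting_insulin_uU_mL, fasting_glucose_mg_dL):
    """Conventional HOMA-IR estimate of insulin resistance.

    Standard approximation: insulin (uU/mL) x glucose (mg/dL) / 405,
    which is equivalent to using glucose in mmol/L divided by 22.5.
    Higher values indicate greater insulin resistance.
    """
    return fasting_insulin_uU_mL * fasting_glucose_mg_dL / 405.0

# Example: fasting insulin 15 uU/mL, fasting glucose 100 mg/dL
print(round(homa_ir(15, 100), 2))  # -> 3.7
```

In the study, "insulin resistance" was defined relative to the cohort itself (the top quartile of HOMA scores), not by a fixed numeric threshold.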

Overall, 180 subjects had one or more symptomatic vascular events, including 46 ischemic strokes, 45 MIs, and 121 vascular deaths.

Study subjects with insulin resistance – those in the highest quartile of HOMA scores – showed a significant 2.8-fold higher risk of ischemic stroke than those with lower HOMA scores. This association was stronger in men than in women, and it persisted when the data were adjusted to control for sociodemographic factors, the presence or absence of the metabolic syndrome, and vascular risk factors.

In contrast, neither the association between insulin resistance and MI nor the association between insulin resistance and vascular death was significant, Dr. Rundek and her colleagues said (Arch. Neurol. 2010;67:1195-200).

The findings should not be considered conclusive, since replication “with larger data sets and more end points” is still necessary, they added.

The study was supported by the Goddess Fund for Stroke Research in Women, the National Institute of Neurological Disorders and Stroke, the American Heart Association, and Columbia University. No financial conflicts of interest were reported.

View on the News

Suggestion, Not Proof

The findings do not prove that insulin resistance is a significant causal risk factor for stroke, independent of other risk factors, noted Dr. Graeme J. Hankey and Dr. Tan Ze Feng.

If insulin resistance is confirmed as a causal risk factor rather than just a marker of increased risk, the implications are exciting “because insulin resistance can not only be measured but also treated,” they said.

Measuring insulin resistance in certain cases may help refine prognostic estimates of stroke risk. “[Its] measurement may have a role in particular cases in which traditional risk stratification schemes suggest that the patient is at intermediate risk of stroke …, and in whom an additional finding of insulin resistance may be sufficiently compelling to supplement lifestyle advice with pharmacological interventions to lower stroke risk,” they wrote.

DR. HANKEY AND DR. FENG are in the department of neurology at Royal Perth (Australia) Hospital. They reported no conflicts of interest. These comments are taken from their editorial (Arch. Neurol. 2010;67:1177-8).

Article Source

From the Archives of Neurology

Severe Hypoglycemia Signals Mortality Risk

Major Finding: Patients with type 2 diabetes who had episodes of severe hypoglycemia were at increased risk of major macrovascular events (hazard ratio, 2.88), major microvascular events (HR, 1.81), death from cardiovascular causes (HR, 2.68), and death from any cause (HR, 2.69), compared with patients who did not have severe hypoglycemia episodes.

Data Source: ADVANCE, an international, double-blind, randomized clinical trial comparing standard vs. intensive glucose-lowering therapy in 11,140 adults with longstanding type 2 diabetes.

Disclosures: The ADVANCE study was supported by Servier and the National Health and Medical Research Council of Australia. Dr. Zoungas and her associates reported ties to Servier, Novo Nordisk, Eli Lilly, Sanofi-Aventis, Takeda, Pfizer, Roche, Amgen, AstraZeneca, GlaxoSmithKline, Tanabe, Merck Sharp & Dohme, Abbott, Johnson & Johnson, and Merck/Schering-Plough.

Severe hypoglycemia in patients with long-standing type 2 diabetes is strongly associated with adverse outcomes, including death from cardiovascular and noncardiovascular causes, according to a large analysis.

However, there is no close temporal relation between episodes of severe hypoglycemia and such adverse events, nor is there a dose-response relation in which more frequent episodes carry increasingly higher risks.

“Although our findings cannot exclude the possibility that severe hypoglycemia has a direct causal link with these outcomes, they suggest that it is as likely to be a marker of vulnerability to a wide range of clinical outcomes. In either case, the presence of severe hypoglycemia should raise clinical suspicion of the patient's susceptibility to adverse outcomes and prompt action to address this possibility,” said Dr. Sophia Zoungas of the George Institute for International Health, University of Sydney, and her associates in the ADVANCE trial.

The Action in Diabetes and Vascular Disease: Preterax and Diamicron Modified-Release Controlled Evaluation (ADVANCE) trial assessed 11,140 patients aged 55 years and older who had type 2 diabetes and were followed at 215 medical centers in 20 countries for a median of 5 years.

The study subjects were randomly assigned to receive either intensive or standard glucose-lowering therapy. A total of 231 patients (about 2%) reported experiencing 299 severe hypoglycemic events: 150 (2.7%) of those receiving intensive therapy reported 195 events, and 81 (1.5%) of those receiving standard therapy reported 104 events.

Major macrovascular or microvascular events occurred in 2,125 subjects, 87 of whom reported severe hypoglycemic events. In addition, 1,031 subjects died, 45 of whom reported severe hypoglycemic events.

Nearly 17% of subjects who reported severe hypoglycemia subsequently had a major macrovascular event, 12% had a subsequent major microvascular event, and 20% died. In contrast, the corresponding proportions for subjects who did not report severe hypoglycemia were 10%, 10%, and 9%, respectively, the investigators said (N. Engl. J. Med. 2010;363:1410-8).
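As a rough check, the crude (unadjusted) risk ratios implied by these proportions can be computed directly. Note that they differ from the published hazard ratios, which come from adjusted Cox models; the figures below simply restate the percentages in this paragraph:

```python
# Crude risk ratios from the reported proportions (illustrative only;
# the published hazard ratios are from adjusted Cox regression, not
# this simple arithmetic).
outcomes = {
    "major macrovascular event": (0.17, 0.10),  # with vs. without severe hypoglycemia
    "major microvascular event": (0.12, 0.10),
    "death from any cause": (0.20, 0.09),
}
for name, (risk_hypo, risk_no_hypo) in outcomes.items():
    print(f"{name}: crude risk ratio ~ {risk_hypo / risk_no_hypo:.1f}")
```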

Also, risks for disorders of the respiratory system, digestive system, and skin were higher in patients who had severe hypoglycemic episodes than in those who did not.

Hypoglycemia conceivably could have contributed to both cardiovascular and noncardiovascular disorders and death by means of sympathoadrenal activation, abnormal cardiac repolarization, increased thrombogenesis, inflammation, or vasoconstriction. However, it is also possible, and more likely, that hypoglycemia merely reflected the effects of “coexisting conditions and unmeasured or incompletely quantified confounding variables,” making it a marker rather than a direct cause of adverse outcomes, the investigators noted.

Article Source

From the New England Journal of Medicine

Maternal Flu Vaccine Cuts Infants' Infection Risk

Major Finding: The incidence of influenza-like illness was 6.7 per 1,000 person-days for infants of mothers who had been vaccinated, compared with 7.2 per 1,000 person-days for infants of mothers who had not.

Data Source: A nonrandomized, prospective, observational cohort study involving 1,160 mother-infant pairs.

Disclosures: The study was funded by the U.S. Department of Health and Human Services' National Vaccine Program Office, the Office of Minority and Women's Health (now the Office of Health Disparities), the Centers for Disease Control and Prevention, Aventis-Pasteur, and Evans-Powderject. One of Dr. Eick's associates reported ties (unrelated to this study) with MedImmune, Pfizer, and Sanofi-Pasteur, all of which manufacture influenza vaccine.

Vaccinating pregnant women against seasonal influenza reduced the risk of laboratory-confirmed influenza infection in their infants by 41%, according to a study.

Maternal immunization similarly cut by 39% the risk that infants up to 6 months of age would be hospitalized for influenza-like illness, said Angelia A. Eick, Ph.D., of the Center for American Indian Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, and her associates.

Influenza vaccination is already recommended for pregnant women to reduce their risk of developing flu-related complications. “These findings provide support for the added benefit of protecting infants from influenza virus infection up to 6 months, the period when infants are not eligible for influenza vaccination but are at highest risk of severe influenza illness,” they noted.

Even though such immunization is recommended during pregnancy, it is not well accepted in the United States and many pregnant women do not get vaccinated. Since it would be unethical to perform a randomized, controlled study of maternal vaccination, Dr. Eick and her colleagues conducted a nonrandomized observational study to assess whether immunization during pregnancy conferred protection to infants.

The study subjects were 1,160 mother-infant pairs in which approximately half the mothers (573) had chosen to receive seasonal flu vaccine while pregnant and the other half (587) had declined the vaccine. All were enrolled after delivering healthy singleton infants at seven hospitals serving the Navajo and White Mountain Apache Indian reservations in the southwestern United States during three flu seasons between 2002 and 2005.

A total of 605 infants developed influenza-like illness during the flu season following delivery. “We found a 41% reduction in the risk of laboratory-confirmed influenza virus infection for infants of influenza-vaccinated mothers compared with infants of unvaccinated mothers,” they said (Arch. Pediatr. Adolesc. Med. 2010 [doi:10.1001/archpediatrics.2010.192]).

The incidence of influenza-like illness was 6.7 per 1,000 person-days for infants of vaccinated mothers, compared with 7.2 per 1,000 person-days for infants of unvaccinated mothers; the 41% risk reduction applied specifically to laboratory-confirmed influenza virus infection.
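A rate "per 1,000 person-days" is simply the number of events divided by the total follow-up time, scaled by 1,000. A minimal sketch, using hypothetical event and follow-up counts chosen only to reproduce the vaccinated-group rate above (the study's actual counts are not given in this article):

```python
def incidence_per_1000_person_days(n_events, total_person_days):
    """Incidence rate expressed per 1,000 person-days of follow-up."""
    return n_events / total_person_days * 1000.0

# Hypothetical illustration: 100 influenza-like-illness episodes over
# ~14,925 person-days of infant follow-up.
print(round(incidence_per_1000_person_days(100, 14925), 1))  # -> 6.7
```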

When the analysis was restricted to cases of influenza that required hospitalization, a 39% reduction in risk was found for infants of vaccinated mothers, compared with infants of unvaccinated mothers.

Cord blood samples or infant blood samples taken at 2–3 months of age were available for 160 study subjects. In this subgroup, the risk of influenza infection declined with increasing antibody titers.

The exact mechanism by which vaccination of the mother conferred protection to the infant is not certain. It may be due to maternal influenza antibodies being acquired transplacentally or through breastfeeding, or to reduced infant exposure to influenza in the mother.

It could even be due to residual confounding not accounted for in the statistical analyses, but the finding of significantly higher antibody titers in 2- to 3-month-old infants who did not develop illness argues against that possibility, they said.

View on the News

Barriers to Maternal Vaccination

This study confirms the potential for influenza vaccination of pregnant women to decrease newborn illness.

“In the United States, acceptance of vaccination during pregnancy is poor. Despite the fact that the U.S. Advisory Committee on Immunization Practices (ACIP) has recommended the use of influenza vaccine during pregnancy since 1997, there has been little appreciable change in vaccine use by the group from 1997 through 2009,” noted Dr. Justin R. Ortiz and Dr. Kathleen M. Neuzil.

Studies have indicated that some members of the public believe that influenza infection “is not serious” or hold misconceptions about vaccine safety during pregnancy. But decades of research have demonstrated substantial influenza-associated morbidity in pregnant women and have established the “excellent” safety profile of maternal trivalent inactivated influenza vaccination, they wrote.

JUSTIN R. ORTIZ, M.D., and KATHLEEN M. NEUZIL, M.D., are both with the Vaccine Development Global Program at PATH, an international nonprofit organization dedicated to solving health care problems, and at the University of Washington, Seattle. They reported having no financial disclosures. These comments were summarized from their editorial (Arch. Pediatr. Adolesc. Med. 2010 [doi:10.1001/archpediatrics.2010.193]).


Vaccinating pregnant women against seasonal influenza reduced the risk of laboratory-confirmed influenza infection in their infants by 41%, according to a study.

Maternal immunization similarly cut by 39% the risk that infants up to 6 months of age would be hospitalized for influenza-like illness, said Angelia A. Eick, Ph.D., of the Center for American Indian Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, and her associates.

Influenza vaccination is already recommended for pregnant women to reduce their risk of developing flu-related complications. “These findings provide support for the added benefit of protecting infants from influenza virus infection up to 6 months, the period when infants are not eligible for influenza vaccination but are at highest risk of severe influenza illness,” they noted.

Even though such immunization is recommended during pregnancy, it is not well accepted in the United States and many pregnant women do not get vaccinated. Since it would be unethical to perform a randomized, controlled study of maternal vaccination, Dr. Eick and her colleagues conducted a nonrandomized observational study to assess whether immunization during pregnancy conferred protection to infants.

The study subjects were 1,160 mother-infant pairs in which approximately half the mothers (573) had chosen to receive seasonal flu vaccine while pregnant and the other half (587) had declined the vaccine. All were enrolled after delivering healthy singleton infants at seven hospitals serving the Navajo and White Mountain Apache Indian reservations in the southwestern United States during three flu seasons between 2002 and 2005.

A total of 605 infants developed influenza-like illness during the flu season following delivery. “We found a 41% reduction in the risk of laboratory-confirmed influenza virus infection for infants of influenza-vaccinated mothers compared with infants of unvaccinated mothers,” they said (Arch. Pediatr. Adolesc. Med. 2010 [doi:10.1001/archpediatrics.2010.192]).

The incidence of influenza-like illness was 6.7 per 1,000 person-days for infants of mothers who had been vaccinated, compared with 7.2 per 1,000 person-days for infants of mothers who had not. The larger, 41% risk reduction applied to the narrower outcome of laboratory-confirmed influenza virus infection.
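
For readers who want to see the arithmetic behind person-time rates, a minimal sketch follows; the event counts and person-day denominators below are hypothetical stand-ins chosen to reproduce the reported rates, since the study's raw person-time is not given here:

```python
def incidence_per_1000_person_days(events, person_days):
    """Incidence rate expressed per 1,000 person-days of follow-up."""
    return 1000 * events / person_days

def relative_risk_reduction(rate_exposed, rate_comparison):
    """RRR = 1 - (rate in exposed group / rate in comparison group)."""
    return 1 - rate_exposed / rate_comparison

# Hypothetical denominators that reproduce the reported ILI rates:
ili_vaccinated = incidence_per_1000_person_days(300, 44_776)    # ~6.7
ili_unvaccinated = incidence_per_1000_person_days(305, 42_361)  # ~7.2

# The 41% figure for laboratory-confirmed infection corresponds to a
# rate ratio of about 0.59 (illustrative rates only):
rrr = relative_risk_reduction(5.9, 10.0)  # 0.41
```

Note how modest the gap in influenza-like illness rates is (6.7 vs. 7.2) compared with the 41% reduction in the narrower, laboratory-confirmed outcome.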

When the analysis was restricted to influenza cases that required hospitalization, infants of vaccinated mothers had a 39% lower risk than infants of unvaccinated mothers.

Cord blood samples or infant blood samples taken at 2–3 months of age were available for 160 study subjects. In this subgroup, the risk of influenza infection declined with increasing antibody titers.

The exact mechanism by which maternal vaccination protected the infants is not certain. It may involve maternal influenza antibodies acquired transplacentally or through breastfeeding, or reduced exposure of infants to influenza because their mothers were protected.

It could even be due to residual confounding not accounted for in the statistical analyses, but the finding of significantly higher antibody titers in 2- to 3-month-old infants who did not develop illness argues against that possibility, they said.

View on the News

Barriers to Maternal Vaccination

This study confirms the potential for influenza vaccination of pregnant women to decrease newborn illness.

“In the United States, acceptance of vaccination during pregnancy is poor. Despite the fact that the U.S. Advisory Committee on Immunization Practices (ACIP) has recommended the use of influenza vaccine during pregnancy since 1997, there has been little appreciable change in vaccine use by the group from 1997 through 2009,” noted Dr. Justin R. Ortiz and Dr. Kathleen M. Neuzil.

Studies have indicated that some members of the public believe that influenza infection “is not serious” or hold misconceptions about vaccine safety during pregnancy. But decades of research have demonstrated substantial influenza-associated morbidity in pregnant women and have established the “excellent” safety profile of maternal trivalent inactivated influenza vaccination, they wrote.

JUSTIN R. ORTIZ, M.D., and KATHLEEN M. NEUZIL, M.D., are both with the Vaccine Development Global Program at PATH, an international nonprofit organization dedicated to solving health care problems, and at the University of Washington, Seattle. They reported having no financial disclosures. These comments were summarized from their editorial (Arch. Pediatr. Adolesc. Med. 2010 [doi:10.1001/archpediatrics.2010.193]).

Maternal Flu Vaccine Cuts Infants' Infection Risk

From the Archives of Pediatrics & Adolescent Medicine

RA, Cardiovascular Markers Predict CV Events


Major Finding: Seven markers of RA severity and six markers of cardiovascular risk were important and independent predictors of MI, stroke, or TIA during 2 years of follow-up in patients with RA.

Data Source: Post hoc analysis of data on 10,156 patients with RA enrolled in CORRONA, a longitudinal cohort study involving 103 U.S. medical centers.

Disclosures: There was no specific support for this analysis. CORRONA has received general support in the last 2 years from Abbott, Amgen, BMS, Centocor, Genentech, Lilly, and Roche. Dr. Solomon receives support from the National Institutes of Health, the Agency for Healthcare Quality and Research, the Arthritis Foundation, Abbott, and Amgen.

Both markers of rheumatoid arthritis severity and traditional markers of cardiovascular risk are important and independent predictors of future CV events among patients who have RA, according to a report published in the Annals of the Rheumatic Diseases.

Clinicians therefore can target both types of markers to reduce the incidence of CV events, which are the major source of mortality in patients with RA, said Dr. Daniel H. Solomon, chief of the section of clinical sciences in the division of rheumatology, immunology, and allergy at the Brigham and Women's Hospital, Boston, and his associates.

The investigators examined the relative importance of the two types of markers in predicting CV events using a large, longitudinal cohort of RA patients: subjects enrolled in CORRONA (Consortium of Rheumatology Researchers of North America), which includes more than 17,000 patients treated by 268 academic and community rheumatologists at 103 medical centers across the United States. Enrollment began in 2002, and patients were followed through 2006.

For this analysis, 10,156 subjects were followed for a median of 22 months for the development of incident MI, stroke, or transient ischemic attack. Cases of heart failure, peripheral artery disease, and CV-related death were excluded from the study. The subjects' mean age was 59 years, and 75% were women. Median disease duration at baseline was 7 years.

There were 29 MIs and 47 strokes or TIAs during follow-up, for an event rate of about 4 per 1,000 person-years.
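
That rate can be sanity-checked with back-of-the-envelope arithmetic; treating the median 22-month follow-up as if it were the mean is a simplification for illustration, not the study's actuarial method:

```python
# Rough reconstruction of the reported event rate (~4 per 1,000
# person-years); approximate, since median follow-up != mean follow-up.
events = 29 + 47                    # MIs plus strokes/TIAs
patients = 10_156
median_followup_years = 22 / 12     # 22 months
person_years = patients * median_followup_years
rate_per_1000_py = 1000 * events / person_years
print(round(rate_per_1000_py, 1))   # roughly 4.1
```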

Six traditional markers of CV risk – hypertension, diabetes, hyperlipidemia, current tobacco use, known cardiovascular disease, and a family history of premature (at age 50 years or younger) CV events – were important predictors of CV events during follow-up. In addition, seven markers of RA severity – disease duration greater than 5 years, radiographically evident joint erosions, subcutaneous nodules, prior total joint replacement, a score of 2 or more on the modified Health Assessment Questionnaire, a score of 23 or more on the Clinical Disease Activity Index, and seropositivity for rheumatoid factor – were strong, independent predictors of CV risk.

Moreover, the incidence of CV events escalated as the number of either type of risk factor increased. The incidence was 0 among patients with no CV risk factors and no markers of RA severity, and it rose to 7.5 per 1,000 person-years in patients with two or more CV risk factors and three or more markers of RA severity, Dr. Solomon and his colleagues said (Ann. Rheum. Dis. 2010;69:1920–5).

In statistical models that incorporated both types of risk factors plus patient age and sex, the predictive value was comparable to that calculated using the Framingham risk score, they noted.

“These results suggest that strategies to reduce CV risk should focus on a strategy of controlling both traditional CV risk factors as well as controlling RA severity,” the investigators said.

A large clinical trial of statin use for primary prevention of CV events in RA patients is now under way, they added.


Intensive BP Control Didn't Shine in Chronic Kidney Disease

Findings Offer Hope for Some

Intensive blood pressure control didn't slow the progression of chronic kidney disease any better than standard blood pressure control in most patients, according to a report in the New England Journal of Medicine.

It appears that the more intensive approach may benefit only patients who have proteinuria with a protein-creatinine ratio greater than 0.22, a value that is compatible with the widely accepted threshold of 300 mg/day for absolute urinary protein excretion, said Dr. Lawrence J. Appel of Johns Hopkins University, Baltimore, and his associates in the AASK (African-American Study of Kidney Disease and Hypertension) Collaborative Research Group.
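
The rough equivalence of the two thresholds can be checked by multiplying the ratio by an assumed daily creatinine excretion; the 1.4 g/day figure below is a hypothetical mid-range value for adults, not a number from the study:

```python
def estimated_proteinuria_mg_per_day(protein_creatinine_ratio,
                                     creatinine_g_per_day=1.4):
    """Estimate absolute urinary protein excretion (mg/day) from a
    spot protein-creatinine ratio (g protein per g creatinine),
    given an assumed daily creatinine excretion."""
    return protein_creatinine_ratio * creatinine_g_per_day * 1000

# A ratio of 0.22 lands near the 300 mg/day proteinuria threshold:
print(round(estimated_proteinuria_mg_per_day(0.22)))  # 308
```

Daily creatinine excretion varies with muscle mass (roughly 1.0-1.5 g/day in adults), which is why the two cutoffs correspond only approximately.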

Until now, “few trials have tested the effects of intensive blood pressure control [compared with conventional control] on the progression of chronic kidney disease, and the findings from such trials have been inconsistent. Despite a lack of compelling evidence, numerous guidelines recommend a reduced blood pressure target in patients with [chronic kidney disease],” they wrote.

Previous studies have rarely followed patients beyond 5 years, even though it typically takes longer than that for end-stage renal disease (ESRD) to develop in patients with CKD, the researchers noted.

The AASK study compared outcomes between the two approaches to blood pressure control in 1,094 black adults with mild to moderate hypertensive chronic kidney disease (defined as diastolic BP greater than 95 mm Hg and a glomerular filtration rate of 20-65 mL/min per 1.73 m²) but without marked proteinuria.

Patients with diabetes were excluded from the clinical trial.

In the first phase of the AASK investigation, patients were randomly assigned to either intensive BP control with a target of 92 mm Hg or lower mean arterial pressure (that is, lower than the usual target of 130/80 mm Hg recommended for CKD patients) or to conventional BP control with a target of 102-107 mm Hg mean arterial pressure (which corresponds to the conventional BP target of 140/90 mm Hg).
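
As a rough check on how these mean-arterial-pressure targets map onto familiar cuff readings, MAP is conventionally estimated as diastolic pressure plus one-third of the pulse pressure (a standard approximation, not a formula given in the study):

```python
def mean_arterial_pressure(systolic, diastolic):
    """Conventional estimate: MAP ≈ DBP + (SBP - DBP) / 3."""
    return diastolic + (systolic - diastolic) / 3

# The conventional 140/90 mm Hg target falls inside the
# standard-control band of 102-107 mm Hg MAP:
print(round(mean_arterial_pressure(140, 90), 1))  # 106.7

# The 130/80 mm Hg target corresponds to a MAP of about 96.7,
# so the intensive target of 92 mm Hg sits below it:
print(round(mean_arterial_pressure(130, 80), 1))  # 96.7
```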

Throughout this initial phase of the trial, which lasted approximately 4 years, mean blood pressure was significantly lower in the intensive-control group (130/78 mm Hg) than in the standard-control group (141/86 mm Hg).

However, there was no significant difference in the primary outcome of progression of kidney disease, development of ESRD, or death.

Likewise, there was no significant difference between the two approaches in secondary or clinical outcomes, the investigators noted.

In the second phase of the AASK investigation, patients who had not yet developed ESRD were invited to continue in a cohort portion of the trial, in which the BP target was 140/90 mm Hg.

In 2004, when national guidelines were changed, this target was amended to lower than 130/80 mm Hg.

After a cumulative follow-up of 8-12 years, there still was no significant difference in primary or secondary outcomes between those who were initially assigned to the intensive-control and the standard-control groups.

More intensive blood pressure control did not slow the rate of progression of CKD, Dr. Appel and his associates reported (N. Engl. J. Med. 2010;363:918-29).

However, the intensive-control approach did benefit one subgroup of patients with proteinuria: those who had a protein-creatinine ratio of more than 0.22 at baseline, the study investigators said.

These patients showed a significant reduction in the primary outcome of progression of kidney disease, development of ESRD, or death, as well as in secondary and clinical outcomes, the researchers added.

The reason for this discrepancy is not known.

“Overall, it is hard to develop a coherent, biologically plausible argument for a qualitative interaction between harm in patients without proteinuria and benefit in those with proteinuria,” the study authors said.

View on the News

Findings Offer Hope for Some

This study lends hope to the concept that intensive treatment will improve renal outcomes in at least some patients with hypertension, chronic kidney disease, and microalbuminuria.

Data from other studies also support the conclusion that intensive BP control is beneficial in select patients.

The Modification of Diet in Renal Disease trial showed that intensive BP control, compared with standard control, benefited patients who had more than 1 g of proteinuria at baseline. The ESCAPE (Effect of Strict Blood Pressure Control and ACE Inhibition on the Progression of Chronic Renal Failure in Pediatric Patients) trial also demonstrated that intensive BP control with a fixed dose of an ACE inhibitor significantly slowed the progression of renal disease, with the largest effects seen in children who had substantial proteinuria, hypertension, and a reduced GFR at baseline.

In addition, intensive BP control was beneficial in a recent study of adults in Italy who had idiopathic glomerular diseases associated with hypertension and proteinuria.

JULIE R. INGELFINGER, M.D., is chief of pediatric nephrology at Massachusetts General Hospital, Boston, and a deputy editor of the New England Journal of Medicine. These comments were summarized from her editorial accompanying the report (N. Engl. J. Med. 2010).
Author and Disclosure Information

Publications
Topics
Legacy Keywords
intensive blood pressure control, chronic kidney disease, standard blood pressure control, blood pressure, New England Journal of Medicine
Author and Disclosure Information

Author and Disclosure Information

Article PDF
Article PDF
Body

This study lends hope to the concept that intensive treatment will improve renal outcomes in at least some patients with hypertension, chronic kidney disease, and microalbuminuria.

Data from other studies also support the conclusion that intensive BP control is beneficial in select patients.

The Modification of Diet in Renal Disease trial showed that intensive BP control, compared with standard control, benefited patients who had more than 1 g of proteinuria at baseline. The ESCAPE (Effect of Strict Blood Pressure Control and ACE Inhibition on the Progression of Chronic Renal Failure in Pediatric Patients) trial also demonstrated that intensive BP control with a fixed dose of an ACE inhibitor significantly slowed the progression of renal disease, with the largest effects seen in children who had substantial proteinuria, hypertension, and a reduced GFR at baseline.

In addition, intensive BP control was beneficial in a recent study of adults in Italy who had idiopathic glomerular diseases associated with hypertension and proteinuria.

Body

This study lends hope to the concept that intensive treatment will improve renal outcomes in at least some patients with hypertension, chronic kidney disease, and microalbuminuria.

Data from other studies also support the conclusion that intensive BP control is beneficial in select patients.

The Modification of Diet in Renal Disease trial showed that intensive BP control, compared with standard control, benefited patients who had more than 1 g of proteinuria at baseline. The ESCAPE (Effect of Strict Blood Pressure Control and ACE Inhibition on the Progression of Chronic Renal Failure in Pediatric Patients) trial also demonstrated that intensive BP control with a fixed dose of an ACE inhibitor significantly slowed the progression of renal disease, with the largest effects seen in children who had substantial proteinuria, hypertension, and a reduced GFR at baseline.

In addition, intensive BP control was beneficial in a recent study of adults in Italy who had idiopathic glomerular diseases associated with hypertension and proteinuria.

Name
JULIE R. INGELFINGER, M.D., is chief of pediatric nephrology at Massachusetts General Hospital, Boston, and a deputy editor of the New England Journal of Medicine. These comments were summarized from her editorial accompanying the report (N. Engl. J. Med.
Name
JULIE R. INGELFINGER, M.D., is chief of pediatric nephrology at Massachusetts General Hospital, Boston, and a deputy editor of the New England Journal of Medicine. These comments were summarized from her editorial accompanying the report (N. Engl. J. Med.
Title
Findings Offer Hope for Some
Findings Offer Hope for Some

Intensive blood pressure control didn't slow the progression of chronic kidney disease any better than standard blood pressure control in most patients, according to a report in the New England Journal of Medicine.

It appears that the more intensive approach may benefit only patients who have proteinuria with a protein-creatinine ratio greater than 0.22, a value that is compatible with the widely accepted threshold of 300 mg/day for absolute urinary protein excretion, said Dr. Lawrence J. Appel of Johns Hopkins University, Baltimore, and his associates in the AASK (African-American Study of Kidney Disease and Hypertension) Collaborative Research Group.

Until now, “few trials have tested the effects of intensive blood pressure control [compared with conventional control] on the progression of chronic kidney disease, and the findings from such trials have been inconsistent. Despite a lack of compelling evidence, numerous guidelines recommend a reduced blood pressure target in patients with [chronic kidney disease],” they wrote.

Previous studies have rarely followed patients beyond 5 years, even though it typically takes longer than that for end-stage renal disease (ESRD) to develop in patients with CKD, the researchers noted.

The AASK study compared outcomes between the two approaches to blood pressure control in 1,094 black adults with mild to moderate hypertensive chronic kidney disease (defined as diastolic BP greater than 95 mm Hg and a glomerular filtration rate of 20-65 mL/min) but without marked proteinuria.

Patients with diabetes were excluded from the clinical trial.

In the first phase of the AASK investigation, patients were randomly assigned to either intensive BP control with a target of 92 mm Hg or lower mean arterial pressure (that is, lower than the usual target of 130/80 mm Hg recommended for CKD patients) or to conventional BP control with a target of 102-107 mm Hg mean arterial pressure (which corresponds to the conventional BP target of 140/90 mm Hg).

Throughout this initial phase of the trial, which lasted approximately 4 years, mean blood pressure was significantly lower in the intensive-control group (130/78 mm Hg) than in the standard-control group (141/86 mm Hg).

However, there was no significant difference in the primary outcome of progression of kidney disease, development of ESRD, or death.

Likewise, there was no significant difference between the two approaches in secondary or clinical outcomes, the investigators noted.

In the second phase of the AASK investigation, patients who had not yet developed ESRD were invited to continue in a cohort portion of the trial, in which the BP target was 140/90 mm Hg.

In 2004, when national guidelines were changed, this target was amended to lower than 130/80 mm Hg.

After a cumulative follow-up of 8-12 years, there still was no significant difference in primary or secondary outcomes between those who were initially assigned to the intensive-control and the standard-control groups.

More intensive blood pressure control did not slow the rate of progression of CKD, Dr. Appel and his associates reported (N. Engl. J. Med. 2010;363:918-29).

However, the intensive-control approach did benefit one subgroup of patients with proteinuria: those who had a protein-creatinine ratio of more than 0.22 at baseline, the study investigators said.

These patients showed a significant reduction in the primary outcome of progression of kidney disease, development of ESRD, or death, as well as in secondary and clinical outcomes, the researchers added.

The reason for this discrepancy is not known.

“Overall, it is hard to develop a coherent, biologically plausible argument for a qualitative interaction between harm in patients without proteinuria and benefit in those with proteinuria,” the study authors said.

Intensive blood pressure control didn't slow the progression of chronic kidney disease any better than standard blood pressure control in most patients, according to a report in the New England Journal of Medicine.

It appears that the more intensive approach may benefit only patients who have proteinuria with a protein-creatinine ratio greater than 0.22, a value that is compatible with the widely accepted threshold of 300 mg/day for absolute urinary protein excretion, said Dr. Lawrence J. Appel of Johns Hopkins University, Baltimore, and his associates in the AASK (African-American Study of Kidney Disease and Hypertension) Collaborative Research Group.

Until now, “few trials have tested the effects of intensive blood pressure control [compared with conventional control] on the progression of chronic kidney disease, and the findings from such trials have been inconsistent. Despite a lack of compelling evidence, numerous guidelines recommend a reduced blood pressure target in patients with [chronic kidney disease],” they wrote.

Previous studies have rarely followed patients beyond 5 years, even though it typically takes longer than that for end-stage renal disease (ESRD) to develop in patients with CKD, the researchers noted.

The AASK study compared outcomes between the two approaches to blood pressure control in 1,094 black adults with mild to moderate hypertensive chronic kidney disease (defined as diastolic BP greater than 95 mm Hg and a glomerular filtration rate of 20-65 mL/min) but without marked proteinuria.

Patients with diabetes were excluded from the clinical trial.

In the first phase of the AASK investigation, patients were randomly assigned to either intensive BP control with a target of 92 mm Hg or lower mean arterial pressure (that is, lower than the usual target of 130/80 mm Hg recommended for CKD patients) or to conventional BP control with a target of 102-107 mm Hg mean arterial pressure (which corresponds to the conventional BP target of 140/90 mm Hg).
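
The correspondence between the trial's mean arterial pressure (MAP) targets and conventional blood pressure readings can be checked with the standard one-third-pulse-pressure approximation; this is an illustrative sketch using the common clinical formula, not a calculation taken from the trial report:

```python
def mean_arterial_pressure(systolic, diastolic):
    """Approximate MAP as diastolic + (systolic - diastolic) / 3,
    the standard clinical estimate."""
    return diastolic + (systolic - diastolic) / 3

# The conventional 140/90 mm Hg target falls within the trial's
# standard-control MAP range of 102-107 mm Hg:
print(round(mean_arterial_pressure(140, 90), 1))  # 106.7

# A 130/80 mm Hg reading still yields a MAP above the intensive
# target of 92 mm Hg or lower, so the intensive arm required
# blood pressure below the usual 130/80 mm Hg target:
print(round(mean_arterial_pressure(130, 80), 1))  # 96.7
```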

Throughout this initial phase of the trial, which lasted approximately 4 years, mean blood pressure was significantly lower in the intensive-control group (130/78 mm Hg) than in the standard-control group (141/86 mm Hg).

However, there was no significant difference in the primary outcome of progression of kidney disease, development of ESRD, or death.

Likewise, there was no significant difference between the two approaches in secondary or clinical outcomes, the investigators noted.

In the second phase of the AASK investigation, patients who had not yet developed ESRD were invited to continue in a cohort portion of the trial, in which the BP target was 140/90 mm Hg.

In 2004, when national guidelines were changed, this target was amended to lower than 130/80 mm Hg.

After a cumulative follow-up of 8-12 years, there still was no significant difference in primary or secondary outcomes between those who were initially assigned to the intensive-control and the standard-control groups.

More intensive blood pressure control did not slow the rate of progression of CKD, Dr. Appel and his associates reported (N. Engl. J. Med. 2010;363:918-29).

However, the intensive-control approach did benefit one subgroup of patients with proteinuria: those who had a protein-creatinine ratio of more than 0.22 at baseline, the study investigators said.

These patients showed a significant reduction in the primary outcome of progression of kidney disease, development of ESRD, or death, as well as in secondary and clinical outcomes, the researchers added.

The reason for this discrepancy is not known.

“Overall, it is hard to develop a coherent, biologically plausible argument for a qualitative interaction between harm in patients without proteinuria and benefit in those with proteinuria,” the study authors said.

Display Headline
Intensive BP Control Didn't Shine in Chronic Kidney Disease

Vitals

Major Finding: Compared with standard BP control, intensive BP control failed to slow the progression of CKD, prevent the development of end-stage renal disease, or prevent death in most patients who had mild to moderate chronic kidney disease. Intensive BP control was beneficial only in the subgroup of patients who had proteinuria with a protein-creatinine ratio greater than 0.22 at baseline.

Data Source: AASK, a clinical trial with an initial 4-year randomized phase comparing intensive BP control with standard BP control in 1,094 black adults, as well as an observational cohort phase with a further 4-8 years of extended follow-up.

Disclosures: This study was funded by the National Institute of Diabetes and Digestive and Kidney Diseases, the Office of Research in Minority Health, and the National Institutes of Health. King Pharmaceuticals provided financial support and donated antihypertensive medications to each clinical center. AstraZeneca, GlaxoSmithKline, Forest Laboratories, Pharmacia, Pfizer, and Upjohn also donated antihypertensive drugs. None of these companies had any role in the design of the study, the accrual or analysis of data, or the preparation of the manuscript. Some of the investigators reported being in consultant and/or advisory board roles or receiving funds from numerous companies including Daiichi-Sankyo, Novartis, Amgen, King Pharmaceuticals, Abbott, Boehringer-Ingelheim, Litholink, Eli Lilly, Takeda, Merck, and Watson.


Insulin Resistance Predicted Early Alzheimer's

Article Type
Changed
Display Headline
Insulin Resistance Predicted Early Alzheimer's

Major Finding: Insulin-resistant adults with newly diagnosed prediabetes or type 2 diabetes showed reduced glucose metabolism in brain regions known to be similarly affected in mild cognitive impairment and early AD. They also showed an aberrant pattern of cerebral activation during a memory-encoding task and had poorer recall, compared with subjects who were not insulin resistant.

Data Source: A study of fluorodeoxyglucose PET scans in 23 subjects with insulin resistance and 6 healthy controls matched for age and education level.

Disclosures: The study was supported by a grant from the National Institute of Diabetes and Digestive and Kidney Diseases and by the U.S. Department of Veterans Affairs. The investigators reported no financial conflicts of interest.

Insulin resistance in cognitively normal subjects is associated with a pattern of reduced regional cerebral glucose metabolism that is characteristic of mild cognitive impairment and early Alzheimer's disease, according to a study published online Sept. 13.

On PET scanning, subjects with insulin resistance also showed an unusual activation pattern in the brain during a memory encoding task. This coincided with their poorer performance in recalling words, compared with healthy adults who were not insulin resistant.

“Taken together, these results suggest that increased insulin resistance may be a marker of AD [Alzheimer's disease] risk that is associated with reduced regional cerebral glucose metabolism and subtle cognitive impairments at the earliest stage of disease, even before the onset of MCI [mild cognitive impairment],” wrote Laura D. Baker, Ph.D., of the Veterans Affairs Puget Sound Health Care System and the University of Washington, Seattle, and her associates.

Since insulin resistance is known to cause type 2 diabetes and to raise the risk of Alzheimer's disease, Dr. Baker and her colleagues tested the hypothesis that cognitively normal adults with newly diagnosed prediabetes or type 2 diabetes and insulin resistance would already show abnormal cerebral glucose metabolism in regions known to predict susceptibility to AD.

The study subjects were adults with newly diagnosed, as-yet untreated prediabetes (11 patients) or type 2 diabetes (12 patients) – all of whom had insulin resistance – as well as a control group of 6 adults matched for age and education level who had normal glucose values and no insulin resistance. The subjects underwent PET imaging in a resting state and during a 35-minute memory-encoding task, and were tested for delayed free recall after the scanning was completed.

Insulin resistance was associated with reduced glucose metabolism in the posterior cingulate cortex, precuneus region, parietal cortices, temporal/angular gyri, and anterior and inferior prefrontal cortices. In contrast, no such impairment was seen in the control subjects.

“This pattern of hypometabolism has also been observed in patients with MCI and AD, in middle-aged carriers of the APOE e4 genetic risk factor who do not have dementia, and in presymptomatic adults with the AD-causative presenilin-1 gene,” Dr. Baker and her colleagues wrote (Arch. Neurol. 2010 [doi:10.1001/archneurol.2010.225]).

The link between insulin resistance and reduced glucose metabolism was not affected by age, fasting glucose values obtained just before PET scanning, degree of hyperglycemia after oral glucose tolerance testing, or APOE e4 allele status.

In addition, patients with insulin resistance showed a diffuse rather than a more focused pattern of brain activation during the memory encoding task, including activation of areas adjacent to the regions that were activated in the control group. They also showed activation or hyperactivation of areas “not typically engaged in a cognitive task,” a finding that has been reported in patients with prodromal or early AD and non-symptomatic carriers of the APOE e4 allele. This pattern may represent “a compensatory mechanism invoked following dysfunction of the neuroarchitectural network that typically would support a cognitive task,” the researchers noted.

Although the insulin-resistant subjects were not cognitively impaired according to current criteria, their recall ability was poorer than that of the control group in the post-scanning test.


Self-Management Techniques Failed to Improve Heart Failure

Article Type
Changed
Display Headline
Self-Management Techniques Failed to Improve Heart Failure

Major Finding: Patients with chronic heart failure who participated in a self-management intervention later showed no difference from a control group in the rate of death and HF hospitalization.

Data Source: A partially blinded, randomized, controlled trial involving 902 Chicago residents with mild to moderate HF who were followed for 2-3 years.

Disclosures: The HART study was funded by the National Institutes of Health. An associate of Dr. Powell reported receiving research funding from Novartis after the HART study was concluded.

An intervention to teach patients self-management of their chronic heart failure failed to reduce mortality or hospitalizations for the disorder, compared with patient education alone.

Nonadherence to heart failure medications is 30%-60%, and nonadherence to lifestyle recommendations is 50%-80% in the general population. Previous assessments of self-management techniques to improve adherence were limited by small sample sizes and inadequate follow-up times, said Lynda H. Powell, Ph.D., of the department of preventive medicine at Rush University Medical Center, Chicago, and her associates.

The investigators designed HART (the Heart Failure Adherence and Retention Trial) to have sufficient size, duration, and methodologic rigor, and to enroll a sample representative of typical HF patients. They assessed mortality and HF hospitalizations after 1 year of self-management and another 1-2 years of follow-up in 902 patients with mild to moderate HF.

In all, 451 patients (average age, 64 years) were randomized to receive the intervention, and the other 451 served as controls.

Slightly fewer than half of the study subjects were women, and 40% were members of racial/ethnic minority groups. Overall, 23% had preserved systolic function, and the remainder had impaired systolic function, making the sample “representative of typical clinical populations.”

At baseline, patients were taking an average of seven medications. Nearly 40% did not adhere to the prescribed dosage of either an ACE inhibitor or a beta-blocker. Median sodium intake was almost twice as high as is recommended for HF patients.

The intervention included 18 2-hour group meetings over the course of a year. Patients were educated about medication adherence, sudden weight gain, sodium restriction, moderate physical activity, and stress management, and were given American Heart Association tip sheets concerning HF. They also were counseled to help them develop mastery in problem-solving skills and in five self-management skills: self-monitoring, environmental restructuring, elicitation of support from family and friends, cognitive restructuring, and the relaxation response.

The control group received the AHA tip sheets by mail, and discussed the material by phone with study counselors.

The intervention did not improve the primary end point, which was hospitalization for HF events or death. There were 163 events in the intervention group (40%) and 171 in the control group (41%); the annual event rates were 18% and 19%, respectively. Both differences were nonsignificant.

Both study groups had a mean of 0.7 HF hospitalizations. At the study's conclusion, there were no differences between groups in 6-minute walk time, change in New York Heart Association class, heart rate, respiratory rate, blood pressure, or body mass index.

Nonadherence to prescribed ACE inhibitor or beta-blocker therapy had risen by 7% in both groups, the researchers said (JAMA 2010;304:1331-8).

View on the News

Telemonitoring Is the Wave of the Future

Unlike the self-management strategy used in this study, “new technologies to empower patients who have long-term medical conditions such as heart failure may motivate them to take a more active role in their own health care and may promote adherence to treatment,” said Dr. John G.F. Cleland and Inger Ekman, Ph.D.

The self-management intervention in the current study, which included 18 2-hour meetings over a year's time, incurred considerable cost and inconvenience to patients. “Ultimately, electronic media, rather than in-person meetings with nurses and physicians, may become the predominant method of delivering health information, ensuring implementation of advice and treatment, and sending motivational messages efficiently and effectively,” they said. Home telemonitoring also would allow patients to inform clinicians about symptoms, weight, heart rate, heart rhythm, and blood pressure on a daily or weekly basis.

The medical and nursing professions should be a catalyst to the “inevitable” changeover to telemonitoring, they said.

JOHN G.F. CLELAND, M.D., is a cardiologist at the University of Hull (England). INGER EKMAN, PH.D., R.N., is at Göteborg (Sweden) University. Dr. Cleland reported receiving research funding from Philips, a manufacturer of telemonitoring equipment. These comments are taken from their editorial accompanying Dr. Powell's report (JAMA 2010;304:1383-4).

Vitals

Article PDF
Author and Disclosure Information

Publications
Topics
Author and Disclosure Information

Author and Disclosure Information

Article PDF
Article PDF

Major Finding: Patients with chronic heart failure who participated in a self-management intervention later showed no difference from a control group in the rate of death and HF hospitalization.

Data Source: A partially blinded, randomized, controlled trial involving 902 Chicago residents with mild to moderate HF who were followed for 2-3 years.

Disclosures: The HART study was funded by the National Institutes of Health. An associate of Dr. Powell reported receiving research funding from Novartis after the HART study was concluded.

An intervention to teach patients self-management of their chronic heart failure failed to reduce mortality or hospitalizations for the disorder, compared with patient education alone.

Nonadherence to heart failure medications is 30%-60%, and nonadherence to lifestyle recommendationsris 50%-80% in the general population. Previous assessments of self-management techniques to improve adherencehaere limited bytheall samples siz inadequate follow-up times, said Lynda H. Powell, Ph.D., of the department of preventive medicine at Rush University Medical Center, Chicago, and her associates.

The investigators designed HART (Heart Failure Adherence and Retention Trial) to have the size, duration, methodologic rigor, and representation of typical HF patients. They assessed mortality and HF hospitalizations after 1 year of self-management and another 1-2 years of follow-up in 902 patients with mild to moderate HF.

In all, 451 patients (average age, 64 years) were randomized to receive thesintervention, and the other 451 served asa controls.

Slightly fewer than half of the study subjects were women, and 40% were members of racial/ethnic minority groups. Overall, 23% had preserved systolic function, and the remainder had impaired systolic function, making the sample “representative of typical clinical populations.”

At baseline, patients were taking an average of seven medications. Nearly 40% did not adhere to the prescribed dosage of either an ACE inhibitor or a beta-blocker. Median sodium intake was almost twice as high as is recommended for HF patients.

The intervention included 18 2-hour group meetings over the course of a year. Patients were educated about medication adherence, sudden weight gain, sodium restriction, moderate physical activity, and stress management, and were given American Heart Association tip sheets concerning HF. They also were counseled to help them develop mastery in problem-solving skills and in five self-management skills: self-monitoring, environmental restructuring, elicitation of support from family and friends, cognitive restructuring, and the relaxation response.

The control group received the AHA tip sheets by mail, and discussed the material by phone with study counselors.

The intervention did not improve the primary end point, which was hospitalization for HF events or death. There were 163 events in the intervention group (40%) and 171 in the control group (41%); the annual event rates were 18% and 19%, respectively. Both differences were nonsignificant.

Both study groups had a mean of 0.7 HF hospitalizations. At the study's conclusion, there were no differences between groups in 6-minute walk time, change in New York Heart Association class, heart rate, respiratory rate, blood pressure, or body mass index.

Nonadherence to prescribed ACE inhibitor or beta-blocker therapy had risen by 7% in both groups, the researchers said (JAMA 2010;304:1331-8).

View on the News

Telemonitoring Is the Wave of the Future

Unlike the self-management strategy used in this study, “new technologies to empower patients who have long-term medical conditions such as heart failure may motivate them to take a more active role in their own health care and may promote adherence to treatment,” said Dr. John G.F. Cleland and Inger Ekman, Ph.D.

The self-management intervention in the current study, which included 18 2-hour meetings over a year's time, incurred considerable cost and inconvenience to patients. “Ultimately, electronic media, rather than in-person meetings with nurses and physicians, may become the predominant method of delivering health information, ensuring implementation of advice and treatment, and sending motivational messages efficiently and effectively,” they said. Home telemonitoring also would allow patients to inform clinicians about symptoms, weight, heart rate, heart rhythm, and blood pressure on a daily or weekly basis.

The medical and nursing professions should be a catalyst to the “inevitable” changeover to telemonitoring, they said.

JOHN G.F. CLELAND, M.D., is a cardiologist at the University of Hull (England). INGER EKMAN, PH.D., R.N., is at Göteborg (Sweden) University. Dr. Cleland reported receiving research funding from Phillips, a manufacturer of telemonitoring equipment. These comments are taken from their editorial accompanying Dr. Powell's report (JAMA 2010;304:1383-4).

Vitals

Major Finding: Patients with chronic heart failure who participated in a self-management intervention later showed no difference from a control group in the rate of death and HF hospitalization.

Data Source: A partially blinded, randomized, controlled trial involving 902 Chicago residents with mild to moderate HF who were followed for 2-3 years.

Disclosures: The HART study was funded by the National Institutes of Health. An associate of Dr. Powell reported receiving research funding from Novartis after the HART study was concluded.

An intervention to teach patients self-management of their chronic heart failure failed to reduce mortality or hospitalizations for the disorder, compared with patient education alone.

Nonadherence to heart failure medications is 30%-60%, and nonadherence to lifestyle recommendationsris 50%-80% in the general population. Previous assessments of self-management techniques to improve adherencehaere limited bytheall samples siz inadequate follow-up times, said Lynda H. Powell, Ph.D., of the department of preventive medicine at Rush University Medical Center, Chicago, and her associates.

The investigators designed HART (Heart Failure Adherence and Retention Trial) to have the size, duration, methodologic rigor, and representation of typical HF patients. They assessed mortality and HF hospitalizations after 1 year of self-management and another 1-2 years of follow-up in 902 patients with mild to moderate HF.

In all, 451 patients (average age, 64 years) were randomized to receive thesintervention, and the other 451 served asa controls.

Slightly fewer than half of the study subjects were women, and 40% were members of racial/ethnic minority groups. Overall, 23% had preserved systolic function, and the remainder had impaired systolic function, making the sample “representative of typical clinical populations.”

At baseline, patients were taking an average of seven medications. Nearly 40% did not adhere to the prescribed dosage of either an ACE inhibitor or a beta-blocker. Median sodium intake was almost twice as high as is recommended for HF patients.

The intervention included 18 2-hour group meetings over the course of a year. Patients were educated about medication adherence, sudden weight gain, sodium restriction, moderate physical activity, and stress management, and were given American Heart Association tip sheets concerning HF. They also were counseled to help them develop mastery in problem-solving skills and in five self-management skills: self-monitoring, environmental restructuring, elicitation of support from family and friends, cognitive restructuring, and the relaxation response.

The control group received the AHA tip sheets by mail, and discussed the material by phone with study counselors.

The intervention did not improve the primary end point, which was hospitalization for HF events or death. There were 163 events in the intervention group (40%) and 171 in the control group (41%); the annual event rates were 18% and 19%, respectively. Both differences were nonsignificant.

Both study groups had a mean of 0.7 HF hospitalizations. At the study's conclusion, there were no differences between groups in 6-minute walk time, change in New York Heart Association class, heart rate, respiratory rate, blood pressure, or body mass index.

Nonadherence to prescribed ACE inhibitor or beta-blocker therapy had risen by 7% in both groups, the researchers said (JAMA 2010;304:1331-8).

View on the News

Telemonitoring Is the Wave of the Future

Unlike the self-management strategy used in this study, “new technologies to empower patients who have long-term medical conditions such as heart failure may motivate them to take a more active role in their own health care and may promote adherence to treatment,” said Dr. John G.F. Cleland and Inger Ekman, Ph.D.

The self-management intervention in the current study, which included 18 2-hour meetings over a year's time, incurred considerable cost and inconvenience to patients. “Ultimately, electronic media, rather than in-person meetings with nurses and physicians, may become the predominant method of delivering health information, ensuring implementation of advice and treatment, and sending motivational messages efficiently and effectively,” they said. Home telemonitoring also would allow patients to inform clinicians about symptoms, weight, heart rate, heart rhythm, and blood pressure on a daily or weekly basis.

The medical and nursing professions should act as catalysts for the "inevitable" changeover to telemonitoring, they said.

JOHN G.F. CLELAND, M.D., is a cardiologist at the University of Hull (England). INGER EKMAN, PH.D., R.N., is at Göteborg (Sweden) University. Dr. Cleland reported receiving research funding from Philips, a manufacturer of telemonitoring equipment. These comments are taken from their editorial accompanying Dr. Powell's report (JAMA 2010;304:1383-4).

Display Headline
Self-Management Techniques Failed to Improve Heart Failure

'Resect and Discard' Would Cut Colorectal Screening Costs

Article Type
Changed
Display Headline
'Resect and Discard' Would Cut Colorectal Screening Costs

The cost of colorectal cancer screening could be cut substantially without impairing its effectiveness by adopting a “resect and discard” approach for the smallest polyps, Dr. Cesare Hassan and his colleagues said.

A major portion of the cost of colorectal cancer screening is attributed to pathologic examination of polyps that are identified and resected. Among patients at average risk, more than 60% of all polyps detected at screening are diminutive (5 mm or smaller) and have an extremely low likelihood of being cancerous, said Dr. Hassan of Nuovo Regina Margherita Hospital, Rome, and his associates (Clin. Gastroenterol. Hepatol. 2010;8:865-9).

The “resect and discard” approach calls for simply discarding diminutive lesions rather than performing pathology exams on them. This approach is facilitated by the use of new colonoscopy technology that incorporates narrow-band imaging, which allows for better characterization of the smallest polyps and could conceivably avert further histologic assessment.

The investigators used mathematical modeling to create a cost-effectiveness simulation that would assess the potential savings of adopting a “resect and discard” approach for diminutive polyps in a hypothetical cohort of 100,000 average-risk American men and women aged 50-100 years. The hypothetical costs were calculated by using Medicare reimbursement data.

The model assumed that 85% of colorectal cancers develop from a polypoid precursor, and the remaining 15% are de novo tumors. It incorporated several possible health states: no colorectal neoplasia; diminutive (5 mm or smaller), small (6-9 mm), or large (10 mm or larger) adenomatous polyps; localized, regional, or distant colorectal cancer; and colorectal cancer–related death. Hyperplastic polyps also were included in the simulation.

“In order not to overestimate the efficacy of [narrow-band imaging],” the investigators used performance statistics derived from the literature and assumed an 84% rate of high-confidence classification of polyps, with a 94% sensitivity and an 89% specificity for identifying adenomas.

The model further assumed that a “resect and discard” policy was followed for all cases in which a high-confidence diagnosis was achieved using narrow-band imaging, and that all diminutive polyps in which a high-confidence diagnosis could not be made were removed and sent for formal histologic assessment.

The simulation first tested the cost-effectiveness of standard colonoscopy with pathology evaluations of all resected polyps in the cohort. The procedure was found to reduce colorectal cancer incidence by 75% and mortality by 79%.

When these outcomes were projected onto the U.S. population using 2009 census data and assuming a 23% rate of adherence to screening colonoscopy in the general population, standard colonoscopy screening was found to save $451 million annually, compared with no colonoscopy screening.

The simulation then tested the “resect and discard” approach and found an additional annual benefit of $25 per person screened. When this outcome was projected onto the U.S. population, this approach added an estimated $33 million in cost savings to the standard colonoscopy approach.
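The relation between the per-person and population-level figures can be checked with back-of-envelope arithmetic (a hedged sketch; it assumes the $33 million projection is simply the $25 per-person savings multiplied by the number of people screened annually, which the article does not state explicitly):

```python
# Figures reported in the article's projection.
added_savings_total = 33_000_000   # dollars per year, U.S. population projection
savings_per_person = 25            # dollars per person screened

# Implied number of people screened annually under the stated assumption.
implied_screened = added_savings_total / savings_per_person
print(f"{implied_screened:,.0f}")  # 1,320,000
```

Roughly 1.3 million screening colonoscopies per year would reconcile the two figures, which is plausible given the 23% adherence rate the model assumed.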

Importantly, the “resect and discard” approach showed no meaningful effect on the efficacy of colorectal cancer screening, Dr. Hassan and his colleagues noted.

“In theory, the 'resect and discard' strategy could affect the efficacy of colonoscopy screening. On one hand, the imperfect narrow-band imaging sensitivity for diminutive adenomas would misclassify some polyps as hyperplastic, preventing the standard follow-up strategy, whilst on the other, the misclassification of hyperplastic polyps as adenomatous lesions caused by the suboptimal specificity would lead to a more intensive and inappropriate 5-year colonoscopy surveillance in some individuals.

“However, the net effect of these two opposing forces was found to be meaningless, mainly because of the marginal efficacy associated with postpolypectomy surveillance, especially for diminutive lesions, compared with the substantial efficacy associated with polypectomy in preventing colorectal cancer,” they explained.

No industry funding supported this study. Dr. Hassan's colleagues reported ties to Medicsight, Viatronix, and Philips, as well as Olympus.


Buprenorphine Implants Reduce Opioid Dependence

Article Type
Changed
Display Headline
Buprenorphine Implants Reduce Opioid Dependence

Major Finding: Among patients addicted to opioids, 40% were able to discontinue illicit drug use for 4 months and 37% for 6 months after receiving buprenorphine implants, while only 28% and 22%, respectively, discontinued illicit drug use after receiving placebo implants.

Data Source: A phase III, randomized placebo-controlled trial involving 163 patients treated at 18 U.S. clinical centers and followed for 6 months.

Disclosures: This study was funded by Titan Pharmaceuticals, maker of the buprenorphine implants, which was involved in the design and management of the study, data collection and analysis, and preparation and approval of the manuscript. Dr. Ling and his associates reported numerous ties to drug and device manufacturers.

Buprenorphine implants helped approximately 40% of patients addicted to opioids markedly reduce their drug use for 6 months in a phase III study of this new method of delivery.

Also, two-thirds of the study subjects who received the implants completed 24 weeks of treatment without cravings or withdrawal symptoms compelling them to drop out, said Dr. Walter Ling of the UCLA Integrated Substance Abuse Programs, Los Angeles, and associates.

In comparison, studies of sublingual buprenorphine found a median adherence of only 40 days in clinical settings, and 6-month clinical trials report subject retention rates of 35%-38%, they noted.

The implantable formulation of buprenorphine was developed to address dependent patients' problems with adherence and “diversion,” or using the drug for some purpose other than treatment, such as selling it. The implants deliver an initial pulse of buprenorphine followed by the release of a constant, low level for 6 months. This avoids the peaks and troughs in plasma levels that occur with other methods of delivery.

Dr. Ling and his colleagues performed their industry-sponsored phase III study at 18 community addiction treatment centers across the United States. In all, 108 subjects were randomly assigned to receive four buprenorphine implants and 55 to receive four placebo implants in the subdermal space in the inner side of the nondominant arm.

The study subjects were allowed to receive supplemental sublingual buprenorphine-plus-naloxone tablets if they experienced significant withdrawal symptoms or cravings. They also were allowed to get one additional implant if necessary. All received individual drug counseling twice a week for 3 months and weekly thereafter.

The patients' use of illicit drugs was monitored throughout the study by urinalyses done 3 times per week.

The primary outcome measure was early treatment response, assessed as the percentage of the 48 urine samples from the first 16 weeks of the trial that were negative for illicit opioids. This rate was 40% with the buprenorphine implants, vs. 28% with the placebo implants (JAMA 2010;304:1576-83).
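The 48-sample denominator follows from the testing schedule described above; a minimal consistency check (all figures taken from the article):

```python
# Urinalysis schedule: 3 tests per week during the first 16 weeks.
samples_per_week = 3
weeks_primary_endpoint = 16
total_samples = samples_per_week * weeks_primary_endpoint
print(total_samples)  # 48, the denominator for the primary outcome
```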

For the full 6-month treatment period, in which 72 urine samples were analyzed for each subject, 37% were negative for illicit opioids in the buprenorphine group, vs. 22% in the placebo group.

Adherence was significantly better with the active treatment at 16 weeks (82% with buprenorphine vs. 51% with placebo) and at the conclusion of the study (66% vs. 31%). Throughout the study, the implant group also had significantly lower scores on measures of opiate withdrawal and opioid craving.

No patients with buprenorphine implants were classified as treatment failures; 31% with placebo implants were.

Adverse reactions at the treatment site were common and expected in both groups, and resolved without incident in all but three patients. One serious adverse event may have been related to treatment: A pulmonary embolism and exacerbation of chronic obstructive pulmonary disease occurred in a patient with a history of pulmonary embolism and COPD, whose respiratory function might have been impaired by the buprenorphine. One patient in the placebo group also had a serious adverse event, cellulitis at the implant site.

“There were no clinically meaningful changes” in vital signs, physical exam findings, electrocardiograms, hematology values, or coagulation values.

There was no evidence of attempted removal of the implants, so “diversion” appears unlikely with this method of delivery, Dr. Ling and his associates said.

They cited among the study's limitations the fact that all of the patients received psychosocial counseling, and that their trial was not "statistically powered to examine efficacy within subgroups of patients."

View on the News

'Promising' New Delivery Method

These findings suggest that a promising new approach to opioid addiction may be close at hand. If further study shows that buprenorphine implants are as good as or better than current treatments, this study would represent a major advance, said Dr. Patrick G. O'Connor.

However, further improvement in the implant delivery system appears to be warranted, given the low plasma levels of buprenorphine that the study subjects attained and the degree to which they required supplemental sublingual drug.

In addition, the treatment is complex and resource intense, requiring implantation and removal procedures as well as specialized counseling. This study tested its use in special treatment centers with close medical supervision, but provided “relatively little information about how implants might be used in office practice,” he said.

Article PDF
Author and Disclosure Information

Publications
Topics
Author and Disclosure Information

Author and Disclosure Information

Article PDF
Article PDF

Major Finding: Among patients addicted to opioids, 40% were able to discontinue illicit drug use for 4 months and 37% for 6 months after receiving buprenorphine implants, while only 28% and 22%, respectively, discontinued illicit drug use after receiving placebo implants.

Data Source: A phase III, randomized placebo-controlled trial involving 163 patients treated at 18 U.S. clinical centers and followed for 6 months.

Disclosures: This study was funded by Titan Pharmaceuticals, maker of the buprenorphine implants, which was involved in the design and management of the study, data collection and analysis, and preparation and approval of the manuscript. Dr. Ling and his associates reported numerous ties to drug and device manufacturers.

Buprenorphine implants helped approximately 40% of patients addicted to opioids markedly reduce their drug use for 6 months in a phase III study of this new method of delivery.

Also, two-thirds of the study subjects who received the implants completed 24 weeks of treatment without cravings or withdrawal symptoms compelling them to drop out, said Dr. Walter Ling of the UCLA Integrated Substance Abuse Programs, Los Angeles, and associates.

In comparison, studies of sublingual buprenorphine found a median adherence of only 40 days in clinical settings, and 6-month clinical trials report subject retention rates of 35%-38%, they noted.

The implantable formulation of buprenorphine was developed to address dependent patients' problems with adherence and “diversion,” or using the drug for some purpose other than treatment, such as selling it. The implants deliver an initial pulse of buprenorphine followed by the release of a constant, low level for 6 months. This avoids the peaks and troughs in plasma levels that occur with other methods of delivery.

Dr. Ling and his colleagues performed their industry-sponsored phase III study at 18 community addiction treatment centers across the United States. In all, 108 subjects were randomly assigned to receive four buprenorphine implants and 55 to receive four placebo implants in the subdermal space in the inner side of the nondominant arm.

The study subjects were allowed to receive supplemental sublingual buprenorphine-plus-naloxone tablets if they experienced significant withdrawal symptoms or cravings. They also were allowed to get one additional implant if necessary. All received individual drug counseling twice a week for 3 months and weekly thereafter.

The patients' use of illicit drugs was monitored throughout the study by urinalyses done 3 times per week.

The primary outcome measure was early treatment response, assessed as the percentage of the 48 urine samples from the first 16 weeks of the trial that were negative for illicit opioids. This rate was 40% with the buprenorphine implants, vs. 28% with the placebo implants (JAMA 2010;304:1576-83).

For the full 6-month treatment period, in which 72 urine samples were analyzed for each subject, 37% were negative for illicit opioids in the buprenorphine group, vs. 22% in the placebo group.

Adherence was significantly better with the active treatment at 16 weeks (82% with buprenorphine vs. 51% with placebo) and at the conclusion of the study (66% vs. 31%). Throughout the study, the implant group also had significantly lower scores on measures of opiate withdrawal and opioid craving.

No patients with buprenorphine implants were classified as treatment failures; 31% with placebo implants were.

Adverse reactions at the treatment site were common and expected in both groups, and resolved without incident in all but three patients. One serious adverse event may have been related to treatment: A pulmonary embolism and exacerbation of chronic obstructive pulmonary disease occurred in a patient with a history of pulmonary embolism and COPD, whose respiratory function might have been impaired by the buprenorphine. One patient in the placebo group also had a serious adverse event, cellulitis at the implant site.

“There were no clinically meaningful changes” in vital signs, physical exam findings, electrocardiograms, hematology values, or coagulation values.

There was no evidence of attempted removal of the implants, so “diversion” appears unlikely with this method of delivery, Dr. Ling and his associates said.

They cited among studyslimitations the fact that. llof tients received psychosocial counseling, and that. Alir trial is not “statistically powered to examine efficacy within subgroups of patients.”

View on The News

'Promising' New Delivery Method

These findings suggest that a promising new approach to opioid addiction may be close at hand. If further study shows that buprenorphine implants are as good as or better than current treatments, this study would represent a major advance, said Dr. Patrick G. O'Connor.

However, further improvement in the implant delivery system appears to be warranted, given the low plasma levels of buprenorphine that the study subjects attained and the degree to which they required supplemental sublingual drug.

 

 

In addition, the treatment is complex and resource intense, requiring implantation and removal procedures as well as specialized counseling. This study tested its use in special treatment centers with close medical supervision, but provided “relatively little information about how implants might be used in office practice,” he said.

Major Finding: Among patients addicted to opioids, 40% were able to discontinue illicit drug use for 4 months and 37% for 6 months after receiving buprenorphine implants, while only 28% and 22%, respectively, discontinued illicit drug use after receiving placebo implants.

Data Source: A phase III, randomized placebo-controlled trial involving 163 patients treated at 18 U.S. clinical centers and followed for 6 months.

Disclosures: This study was funded by Titan Pharmaceuticals, maker of the buprenorphine implants, which was involved in the design and management of the study, data collection and analysis, and preparation and approval of the manuscript. Dr. Ling and his associates reported numerous ties to drug and device manufacturers.

Buprenorphine implants helped approximately 40% of patients addicted to opioids markedly reduce their drug use for 6 months in a phase III study of this new method of delivery.

Also, two-thirds of the study subjects who received the implants completed 24 weeks of treatment without cravings or withdrawal symptoms compelling them to drop out, said Dr. Walter Ling of the UCLA Integrated Substance Abuse Programs, Los Angeles, and associates.

In comparison, studies of sublingual buprenorphine found a median adherence of only 40 days in clinical settings, and 6-month clinical trials report subject retention rates of 35%-38%, they noted.

The implantable formulation of buprenorphine was developed to address dependent patients' problems with adherence and “diversion,” or using the drug for some purpose other than treatment, such as selling it. The implants deliver an initial pulse of buprenorphine followed by the release of a constant, low level for 6 months. This avoids the peaks and troughs in plasma levels that occur with other methods of delivery.

Dr. Ling and his colleagues performed their industry-sponsored phase III study at 18 community addiction treatment centers across the United States. In all, 108 subjects were randomly assigned to receive four buprenorphine implants and 55 to receive four placebo implants in the subdermal space in the inner side of the nondominant arm.

The study subjects were allowed to receive supplemental sublingual buprenorphine-plus-naloxone tablets if they experienced significant withdrawal symptoms or cravings. They also were allowed to get one additional implant if necessary. All received individual drug counseling twice a week for 3 months and weekly thereafter.

The patients' use of illicit drugs was monitored throughout the study by urinalyses done 3 times per week.

The primary outcome measure was early treatment response, assessed as the percentage of the 48 urine samples from the first 16 weeks of the trial that were negative for illicit opioids. This rate was 40% with the buprenorphine implants, vs. 28% with the placebo implants (JAMA 2010;304:1576-83).

For the full 6-month treatment period, in which 72 urine samples were analyzed for each subject, 37% were negative for illicit opioids in the buprenorphine group, vs. 22% in the placebo group.

Adherence was significantly better with the active treatment at 16 weeks (82% with buprenorphine vs. 51% with placebo) and at the conclusion of the study (66% vs. 31%). Throughout the study, the implant group also had significantly lower scores on measures of opiate withdrawal and opioid craving.

No patients with buprenorphine implants were classified as treatment failures; 31% with placebo implants were.

Adverse reactions at the treatment site were common and expected in both groups, and resolved without incident in all but three patients. One serious adverse event may have been related to treatment: A pulmonary embolism and exacerbation of chronic obstructive pulmonary disease occurred in a patient with a history of pulmonary embolism and COPD, whose respiratory function might have been impaired by the buprenorphine. One patient in the placebo group also had a serious adverse event, cellulitis at the implant site.

“There were no clinically meaningful changes” in vital signs, physical exam findings, electrocardiograms, hematology values, or coagulation values.

There was no evidence of attempted removal of the implants, so “diversion” appears unlikely with this method of delivery, Dr. Ling and his associates said.

They cited among studyslimitations the fact that. llof tients received psychosocial counseling, and that. Alir trial is not “statistically powered to examine efficacy within subgroups of patients.”

View on The News

'Promising' New Delivery Method

These findings suggest that a promising new approach to opioid addiction may be close at hand. If further study shows that buprenorphine implants are as good as or better than current treatments, this study would represent a major advance, said Dr. Patrick G. O'Connor.

However, further improvement in the implant delivery system appears to be warranted, given the low plasma levels of buprenorphine that the study subjects attained and the degree to which they required supplemental sublingual drug.

 

 

In addition, the treatment is complex and resource intense, requiring implantation and removal procedures as well as specialized counseling. This study tested its use in special treatment centers with close medical supervision, but provided “relatively little information about how implants might be used in office practice,” he said.

Publications
Topics
Article Type
Display Headline
Buprenorphine Implants Reduce Opioid Dependence
Article Source

PURLs Copyright

Inside the Article

Article PDF Media

Crizotinib Shows Promise Against Non-Small-Cell Lung Cancer

Reason for Optimism
Article Type
Changed
Display Headline
Crizotinib Shows Promise Against Non-Small-Cell Lung Cancer

More than half of select patients with advanced non–small-cell lung cancers responded to treatment with crizotinib, according to findings of a phase I multicenter clinical trial reported in the Oct. 28 issue of the New England Journal of Medicine.

Crizotinib inhibits anaplastic lymphoma kinase (ALK), a receptor tyrosine kinase that has been linked to several types of cancer.

In a group of 82 patients, many of whom had undergone numerous anticancer therapies for advanced ALK-positive tumors, the overall partial and complete response rate was 57%, and disease stabilized in an additional 33%. These results are "impressive" compared with the approximately 10% response rate seen in similar cancers treated with second-line multiagent chemotherapy, said Dr. Eunice L. Kwak of Massachusetts General Hospital Cancer Center, Boston, and her associates.

The probability of 6-month progression-free survival was estimated to be 72% with crizotinib therapy, compared with a rate of 27% for similar tumors treated with second-line chemotherapy.

The dose of oral crizotinib was escalated from 50 mg once daily to 300 mg twice daily. Dose-limiting fatigue was noted at this level, so the maximal dose was cut back to 250 mg twice daily.

A total of 46 patients met RECIST criteria for a partial response and 1 for a complete response to the drug, for an overall response rate of 57%. An additional 27 patients met criteria for stable disease.
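The reported rates follow directly from those counts; a minimal sketch, using only the figures stated in the article (82 evaluable patients; 46 partial responses, 1 complete response, 27 with stable disease):

```python
# Back-of-envelope check of the response rates reported by Kwak et al.
# All counts are taken from the article text.
enrolled = 82
partial, complete, stable = 46, 1, 27

overall_response = (partial + complete) / enrolled  # RECIST responders
stable_fraction = stable / enrolled

print(f"overall response rate: {overall_response:.0%}")  # ~57%
print(f"stable disease:        {stable_fraction:.0%}")   # ~33%
```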

Treatment response was quite rapid, with a disease-control rate of 87% at 8 weeks.

The rapid response to crizotinib "suggests that ALK-positive tumors constitute a second genetically defined subgroup of oncogene-driven lung cancer that is highly susceptible to targeted therapy," the investigators noted.

Nausea and diarrhea were the most common adverse effects. Thirty-four patients (41%) reported mild visual disturbances, but no abnormalities were detected on ophthalmologic examination.

Four patients showed elevated alanine aminotransferase (ALT) levels and 5 showed elevated aspartate aminotransferase (AST) levels, all of which reverted to normal when crizotinib was discontinued. Four of these patients were able to resume the treatment at a lower dose without recurrence of this toxicity.

A total of 63 patients (77%) continued to receive crizotinib after the conclusion of the study and continue to be followed.

These findings demonstrate the importance and feasibility of genotyping to individualize treatment. They also show that non–small-cell lung tumors with ALK rearrangements – which occurred in approximately 5% of the patients screened for participation in this trial – are highly sensitive to ALK kinase inhibition, Dr. Kwak and her colleagues said (N. Engl. J. Med. 2010;363:1693-703).

Two separate case reports published in the same issue of the journal further delineated outcomes with crizotinib therapy.

In the first, a 28-year-old man with large tumor nodules in one lung, multiple enlarged lymph nodes in the mediastinum, atelectasis, and massive effusion in the right pleural cavity showed a marked initial response to the crizotinib within 1 week. However, after 5 months of treatment his tumor "abruptly started to grow again, resulting in a rapid expansion of the pleural effusion and the development of tumors in both lungs," said Dr. Young Lim Choi of Jichi Medical University, Tochigi, Japan, and associates.

Suspecting that the cancer may have acquired genetic changes that conferred resistance to crizotinib, the researchers identified in sputum and effusion specimens two de novo mutations within the kinase domain of the ALK gene. "We do not know whether the resistant clones were present initially or developed secondarily, during treatment," they noted (N. Engl. J. Med. 2010;363:1734-9).

It is likely that an as-yet-unidentified mechanism in oncogenic tyrosine kinases facilitates the development of further mutations that confer resistance to many ALK inhibitors. Further research should shed light on this process and lead to next-generation ALK inhibitors that address these mutations and the resistance they confer, Dr. Choi and colleagues added.

In the second case study, two patients with another type of cancer – inflammatory myofibroblastic tumors (IMTs) – were given crizotinib empirically. It was hoped that the drug would show activity in these patients because approximately half of IMTs carry rearrangements of the ALK gene, said Dr. James E. Butrynski of the Dana-Farber Cancer Institute, Boston, and his associates.

The first patient was a 44-year-old man whose extensive peritoneal and mesenteric cancer had recurred after surgical excision; peritoneal perfusion with combined cisplatin, doxorubicin, and mitomycin C; further treatment with doxorubicin and ifosfamide; and maintenance therapy with imatinib. The patient responded to daily crizotinib beginning in December 2008 and had maintained complete radiographic remission as of press time.

The second patient, a 21-year-old man with IMT involving the stomach, large intestine, gall bladder, and spleen, did not respond to crizotinib, instead showing continued disease progression.

Further analysis showed that the tumor in patient 1 had ALK rearrangements, while that in patient 2 did not. Together, these two cases indicate that crizotinib acts by disrupting aberrant ALK signaling that certain cancers require for continued growth, and that the drug is effective only in those IMTs with ALK rearrangements (N. Engl. J. Med. 2010;363:1727-33).

"Patient 1 continues to have an excellent performance status and only mild side effects, supporting the tolerability of the long-term administration of crizotinib," Dr. Butrynski and his colleagues added.

Dr. Kwak’s study was supported by Pfizer, Massachusetts General Hospital Cancer Center, the Aid for Cancer Research Foundation, National Cancer Institute, Dana-Farber Cancer Institute, Beth Israel Deaconess Medical Center, National Institutes of Health, American Society for Clinical Oncology Cancer Foundation, Memorial Sloan-Kettering Cancer Center, and the University of Colorado Cancer Center. Dr. Kwak and her associates reported numerous financial ties to 67 drug, device, and technology companies.

Dr. Choi’s study was supported in part by the Ministry of Health, Labor, and Welfare of Japan, the Ministry of Education, Culture, Sports, Science, and Technology of Japan, and the Japan Society for the Promotion of Science. Dr. Choi reported ties to Astellas Pharmaceuticals, and associates reported ties to 15 drug, device, and technology companies.

Dr. Butrynski’s study was supported by Pfizer, the National Institutes of Health, the National Cancer Institute–American Society of Clinical Oncology Cancer Foundation, and Cycle for Survival. His associates reported ties to 29 drug, device, and technology companies.

Body

"Together, these three studies provide an optimistic view of the successful treatment of ALK-positive cancers. ... Clearly, in groups of patients with cancers in which ALK is implicated, a standard genotyping approach will be important for a more personalized therapeutic protocol," said Bengt Hallberg, Ph.D., and Ruth H. Palmer, Ph.D.

Given that approximately 5% of patients with non–small-cell lung cancer have tumors with ALK rearrangements, the number of potential recipients of crizotinib with that disease alone approaches 10,000 every year in the United States.
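That figure can be reconstructed with simple arithmetic. Only the ~5% ALK-rearrangement rate comes from the article; the ~200,000 annual U.S. non–small-cell lung cancer diagnoses used here is an illustrative assumption, not a number stated in the source:

```python
# Rough reconstruction of the editorial's "~10,000 per year" estimate.
annual_nsclc_cases = 200_000  # assumed for illustration; not from the article
alk_positive_rate = 0.05      # ~5% of NSCLC tumors, per the article

candidates = annual_nsclc_cases * alk_positive_rate
print(f"potential crizotinib candidates/year: {candidates:,.0f}")  # 10,000
```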

The case report by Dr. Butrynski and associates showed that at least one other malignancy, inflammatory myofibroblastic tumor, will respond to crizotinib, and Dr. Kwak and colleagues note that mutations or translocations of the ALK gene also have been implicated in anaplastic large-cell lymphoma and neuroblastoma. The latter, a devastating childhood cancer in which ALK mutations have been reported in approximately 10% of cases, makes a particularly attractive target for crizotinib, especially in view of the drug’s tolerability during long-term use in these phase I studies.

Dr. Choi and associates demonstrated that mutations conferring resistance to crizotinib are likely to emerge, much like resistance to other tyrosine kinase inhibitors. This "familiar story line" of emerging resistance "highlights the need for basic scientists and clinicians to work together to plan a step ahead of the evolving tumor.

"It is encouraging that some progress in this area has already been made, and a number of such drugs are in the pipeline, including a new ALK inhibitor," they noted.

Dr. Hallberg and Dr. Palmer are in the department of molecular biology at Umea (Sweden) University. Dr. Hallberg reported receiving support from the Swedish Cancer Society and the Swedish Research Council. These comments were summarized from their editorial accompanying the three reports (N. Engl. J. Med. 2010;363:1760-2).

Author and Disclosure Information

Topics
Legacy Keywords
non–small-cell lung cancer, crizotinib, anaplastic lymphoma kinase, ALK, tyrosine kinase, chemotherapy, Eunice L. Kwak, RECIST, ALT, AST, Young Lim Choi, James E. Butrynski, Bengt Hallberg, Ruth H. Palmer
Topics
Article Type
Display Headline
Crizotinib Shows Promise Against Non-Small-Cell Lung Cancer
Legacy Keywords
non–small-cell lung cancer, crizotinib, anaplastic lymphoma kinase, ALK, tyrosine kinase, chemotherapy, Eunice L. Kwak, RECIST , ALT, AST, Young Lim Choi, James E. Butrynski, Bengt Hallberg, Ruth H. Palmer
Article Source

FROM THE NEW ENGLAND JOURNAL OF MEDICINE

PURLs Copyright

Inside the Article