Severity of 2009 H1N1 Infection Found Similar to Previous Years
The severity of the illness caused by the 2009 H1N1 pandemic was no worse than that caused by seasonal influenza A in a Wisconsin population of 50,000 children and adults, according to a report in the Sept. 8 JAMA.
"Our results suggest that the clinical manifestations and risk of hospital admission are similar for the 2009 H1N1 and other seasonal influenza A strains among those presenting for medical care and documented to have influenza infection," said Dr. Edward A. Belongia of the Marshfield (Wis.) Clinic Research Foundation and his associates.
Previous studies have been unable to make direct comparisons of the spectrum of influenza illnesses because all the data on H1N1 have been surveillance data, particularly reports on hospital admissions and fatalities. "These reports have provided valuable descriptive information, but differing criteria for influenza testing by season and the lack of uniform standards for data collection and reporting limit comparisons with other influenza viruses," Dr. Belongia and his colleagues said.
By contrast, their study made comparisons using a defined population in which enrollment and laboratory methods were consistent across all subjects. The researchers compared the characteristics of the illness caused by the pandemic with those of the illness caused by seasonal flu infection using both inpatient and outpatient medical records for all residents of 14 ZIP code areas surrounding Marshfield, an area home to approximately 50,000 people who receive all their medical care at the clinic facilities.
All patients who presented for medical care with at least one flu symptom were tested for influenza using nasopharyngeal or nasal swabs. The study periods encompassed 10 weeks during the 2007-2008 flu season, 12 weeks during the 2008-2009 flu season, and 27 weeks during the 2009 pandemic.
There were 545 cases of 2009 H1N1 infection, 221 cases of seasonal H1N1 in 2008-2009, and 632 cases of H3N2 infection in 2007-2008.
Symptom severity scores were calculated for each patient based on self- or parental report of the severity of cough, fever, chills, fatigue, nasal congestion, wheezing, vomiting, headache, muscle ache, sore throat, ear pain, and nausea. Possible scores ranged from 1 for a single mild symptom to 36 for 12 severe symptoms.
Symptom severity scores were comparable for the three infections, with a median score of 14 for 2009 H1N1, 16 for the 2008-2009 seasonal flu, and 17 for 2007-2008 seasonal flu. The proportion of patients who received antiviral medications was similar in all three study groups.
The cumulative incidence of hospital admission within 30 days of onset per 1,000 residents was 0.25 for 2009 H1N1, 0.15 for seasonal flu in 2008-2009, and 0.50 for seasonal flu in 2007-2008, which are nonsignificant differences, Dr. Belongia and his associates wrote (JAMA 2010;304:1091-8).
"Other published studies have reported a higher than expected incidence of hospitalization and death associated with 2009 H1N1 infection, particularly in children. This finding may be due to the greatly elevated incidence of 2009 H1N1 influenza in a highly susceptible population of children and young adults rather than increased virulence of 2009 H1N1 relative to seasonal influenza A viruses," they said.
The investigators cautioned that their findings regarding the most serious outcomes, such as pneumonia, were limited by the small number of such outcomes in all the study groups.
This study was funded by a grant from the Centers for Disease Control and Prevention. No financial conflicts of interest were reported.
Major Finding: On a scale from 1-36, the median score of symptom severity was 14 for 2009 H1N1 influenza compared with 16 for the seasonal flu in 2008-2009, and 17 for the seasonal flu in 2007-2008.
Data Source: A single-center, prospective study of all patients who tested positive for H1N1 or seasonal influenza A out of a population of approximately 50,000 adults and children residing in Wisconsin.
Disclosures: The study was funded by a grant from the Centers for Disease Control and Prevention, Atlanta. No financial conflicts of interest were reported.
Adjuvant Gemcitabine Comparable to Fluorouracil for Resected Pancreatic Cancer
Gemcitabine as adjuvant chemotherapy following complete resection for pancreatic cancer did not improve overall survival when compared with standard chemotherapy with fluorouracil plus folinic acid in a randomized, controlled, open-label phase III trial, according to a report in the Sept. 8 issue of JAMA.
In what they described as “the largest ever adjuvant trial conducted in pancreatic cancer,” the researchers found that median overall survival and median progression-free survival were essentially the same between the two chemotherapies, said Dr. John P. Neoptolemos of the University of Liverpool (England) and his associates.
These findings do not confirm the results of a much smaller study involving patients with nonresected advanced pancreatic cancer, in which gemcitabine (Gemzar) conferred a survival benefit compared with fluorouracil.
The 1,088 patients in the European Study Group for Pancreatic Cancer–3 (ESPAC-3) trial were treated at 159 centers in 17 countries after undergoing complete macroscopic resection for ductal adenocarcinoma of the pancreas. The study subjects had a life expectancy of at least 3 months and showed no evidence of malignant ascites, peritoneal metastasis, or spread to the liver or other abdominal or extra-abdominal organs.
A total of 551 patients were randomly assigned to receive 6 months of standard fluorouracil plus folinic acid and 537 to receive 6 months of gemcitabine in the open-label trial. Patients were followed for a median of 2 years.
At the time of data analysis, 753 (69%) of the study subjects had died. Median survival was 23 months for the fluorouracil group and 23.6 months for the gemcitabine group, a nonsignificant difference.
Interim survival estimates for the fluorouracil group were 79% at 12 months and 48% at 24 months. Comparable estimates for the gemcitabine group were 80% at 12 months and 49% at 24 months, again a nonsignificant difference.
Similarly, median progression-free survival was 14 months for both study groups. Interim progression-free survival rates for fluorouracil were 56% at 12 months and 31% at 24 months, which were not significantly different from 61% and 30%, respectively, with gemcitabine, Dr. Neoptolemos and his colleagues said (JAMA 2010;304:1073-81).
About twice as many patients receiving fluorouracil (14%) as those receiving gemcitabine (7.5%) reported serious adverse effects related to their treatment. Patients in the fluorouracil group reported significantly more stomatitis and diarrhea, while those in the gemcitabine group reported more hematologic toxicity.
Quality of life scores were similar between the two groups.
The investigators currently are conducting another study to compare combined gemcitabine plus capecitabine (Xeloda), a fluoropyrimidine, against gemcitabine alone for pancreatic cancer.
In an editorial, Dr. Eileen M. O’Reilly of Memorial Sloan-Kettering Cancer Center and Weill Medical College of Cornell University, New York, said that even though gemcitabine did not prove superior to standard fluorouracil chemotherapy in this study, the findings provide a significant contribution by helping to firmly establish the value of adjuvant chemotherapy alone, rather than chemotherapy plus radiation, for resected pancreatic cancer.
It is now clear, she added, that both gemcitabine and fluorouracil offer a modest but real improvement in overall survival, with about twice as many patients who receive either adjuvant agent surviving for 5 years, compared with those who don’t receive either one.
The study by Neoptolemos et al also demonstrates that for patients unable to tolerate gemcitabine, there is now a clearly validated alternative with fluorouracil, she said (JAMA 2010;304:1124-5).
No financial conflicts of interest were reported by the study authors. This study was supported by Cancer Research UK, the National Cancer Institute of Canada, the Canadian Cancer Society, the Fonds de Recherche de la Société Nationale Française de Gastroentérologie, the Fondazione Italiana Malattie del Pancreas, the Health and Medical Research Council of Australia, the Cancer Councils of New South Wales, Queensland, Victoria, and South Australia, and the UK National Institute for Health Research at Royal Marsden Hospital.
Dr. O’Reilly reported receiving research funding from Sanofi-Aventis and consulting fees from Genentech.
Even though gemcitabine did not prove superior to standard fluorouracil chemotherapy in this study, the findings provide a significant contribution by helping to firmly establish the value of adjuvant chemotherapy alone, rather than chemotherapy plus radiation, for resected pancreatic cancer, Dr. O’Reilly said.
It is now clear that both gemcitabine and fluorouracil offer a modest but real improvement in overall survival, with about twice as many patients who receive either adjuvant agent surviving for 5 years, compared with those who don’t receive either one.
The study by Neoptolemos et al also demonstrates that for patients unable to tolerate gemcitabine, there is now a clearly validated alternative with fluorouracil, she said.
Eileen M. O’Reilly, M.D., is at Memorial Sloan-Kettering Cancer Center and Weill Medical College of Cornell University, New York. She reported receiving research funding from Sanofi-Aventis and consulting fees from Genentech. These comments are taken from her editorial accompanying Dr. Neoptolemos’ report (JAMA 2010;304:1124-5).
Even though gemcitabine did not prove superior to standard fluorouracil chemotherapy in this study, the findings provide a significant contribution by helping to firmly establish the value of adjuvant chemotherapy alone, rather than chemotherapy plus radiation, for resected pancreatic cancer, Dr. O’Reilly said.
It is now clear that both gemcitabine and fluorouracil offer a modest but real improvement in overall survival, with about twice as many patients who receive either adjuvant agent surviving for 5 years, compared with those who don’t receive either one.
The study by Neoptolemos et al also demonstrates that for patients unable to tolerate gemcitabine, there is now a clearly validated alternative with fluorouracil, she said.
Eileen M. O’Reilly, M.D., is at Memorial Sloan-Kettering Cancer Center and Weill Medical College of Cornell University, New York. She reported receiving research funding from Sanofi-Aventis and consulting fees from Genentech. These comments are taken from her editorial accompanying Dr. Neoptolemos’ report (JAMA 2010;304:1124-5).
Even though gemcitabine did not prove superior to standard fluorouracil chemotherapy in this study, the findings provide a significant contribution by helping to firmly establish the value of adjuvant chemotherapy alone, rather than chemotherapy plus radiation, for resected pancreatic cancer, Dr. O’Reilly said.
It is now clear that both gemcitabine and fluorouracil offer a modest but real improvement in overall survival, with about twice as many patients who receive either adjuvant agent surviving for 5 years, compared with those who don’t receive either one.
The study by Neoptolemos et al also demonstrates that for patients unable to tolerate gemcitabine, there is now a clearly validated alternative with fluorouracil, she said.
Eileen M. O’Reilly, M.D., is at Memorial Sloan-Kettering Cancer Center and Weill Medical College of Cornell University, New York. She reported receiving research funding from Sanofi-Aventis and consulting fees from Genentech. These comments are taken from her editorial accompanying Dr. Neoptolemos’ report (JAMA 2010;304:1124-5).
Gemcitabine as adjuvant chemotherapy following complete resection for pancreatic cancer did not improve overall survival when compared with standard chemotherapy with fluorouracil plus folinic acid in a randomized, controlled, open-label phase III trial, according to a report in the Sept. 8 issue of JAMA.
In what they described as “the largest ever adjuvant trial conducted in pancreatic cancer,” median overall survival and median progression-free survival were essentially the same between the two chemotherapies, said Dr. John P. Neoptolemos of the University of Liverpool (England) and his associates.
These findings do not confirm the results of a much smaller study involving patients with nonresected advanced pancreatic cancer, in which gemcitabine (Gemzar) conferred a survival benefit compared with fluorouracil.
The 1,088 patients in the European Study Group for Pancreatic Cancer–3 (ESPAC-3) trial were treated at 159 centers in 17 countries after undergoing complete macroscopic resection for ductal adenocarcinoma of the pancreas. The study subjects had a life expectancy of at least 3 months and showed no evidence of malignant ascites, peritoneal metastasis, or spread to the liver or other abdominal or extra-abdominal organs.
A total of 551 patients were randomly assigned to receive 6 months of standard fluorouracil plus folinic acid and 537 to receive 6 months of gemcitabine in the open-label trial. Patients were followed for a median of 2 years.
At the time of data analysis, 753 (69%) of the study subjects had died. Median survival was 23 months for the fluorouracil group and 23.6 months for the gemcitabine group, a nonsignificant difference.
Interim survival estimates for the fluorouracil group were 79% at 12 months and 48% at 24 months. Comparable estimates for the gemcitabine group were 80% at 12 months and 49% at 24 months, again a nonsignificant difference.
Similarly, median progression-free survival was 14 months for both study groups. Interim progression-free survival rates for fluorouracil was 56% at 12 months and 31% at 24 months, which was not significantly different from 61% and 30%, respectively, with gemcitabine, Dr. Neoptolemos and his colleagues said (JAMA 2010;304:1073-81).
About twice as many patients receiving fluorouracil (14%) as those receiving gemcitabine (7.5%) reported serious adverse effects related to their treatment. Patients in the fluorouracil group reported significantly more stomatitis and diarrhea, while those in the gemcitabine group reported more hematologic toxicity.
Quality of life scores were similar between the two groups.
The investigators currently are conducting another study to compare combined gemcitabine plus capecitabine (Xeloda) (a fluoropyrimidine) against gemcitabine alone for pancreatic cancer.
In an editorial, Dr. Eileen M. O’Reilly of Memorial Sloan-Kettering Cancer Center and Weill Medical College of Cornell University, New York, said that even though gemcitabine did not prove superior to standard fluorouracil chemotherapy in this study, the findings provide a significant contribution by helping to firmly establish the value of adjuvant chemotherapy alone, rather than chemotherapy plus radiation, for resected pancreatic cancer.
It is now clear, she added, that both gemcitabine and fluorouracil offer a modest but real improvement in overall survival, with about twice as many patients who receive either adjuvant agent surviving for 5 years, compared with those who don’t receive either one.
The study by Neoptolemos et al also demonstrates that for patients unable to tolerate gemcitabine, there is now a clearly validated alternative with fluorouracil, she said (JAMA 2010;304:1124-5).
No financial conflicts of interest were reported by the study authors. This study was supported by Cancer Research UK, National Cancer Institute of Canada, Canadian Cancer Society, Fonds de Recherche de la Societe Nationale Francaise de Gastroenterologie, Fondazioone Italiana Malattie del Pancreas, Health and Medical Research Council of Australia, Cancer Councils of New South Wales, Queensland, Victoria, and South Austalia, and the UK National Institute for Health Research at Royal Marsden Hospital.
Dr. O’Reilly reported receiving research funding from Sanofi-Aventis and consulting fees from Genentech.
Gemcitabine as adjuvant chemotherapy following complete resection for pancreatic cancer did not improve overall survival when compared with standard chemotherapy with fluorouracil plus folinic acid in a randomized, controlled, open-label phase III trial, according to a report in the Sept. 8 issue of JAMA.
In what they described as “the largest ever adjuvant trial conducted in pancreatic cancer,” median overall survival and median progression-free survival were essentially the same between the two chemotherapies, said Dr. John P. Neoptolemos of the University of Liverpool (England) and his associates.
These findings do not confirm the results of a much smaller study involving patients with nonresected advanced pancreatic cancer, in which gemcitabine (Gemzar) conferred a survival benefit compared with fluorouracil.
The 1,088 patients in the European Study Group for Pancreatic Cancer–3 (ESPAC-3) trial were treated at 159 centers in 17 countries after undergoing complete macroscopic resection for ductal adenocarcinoma of the pancreas. The study subjects had a life expectancy of at least 3 months and showed no evidence of malignant ascites, peritoneal metastasis, or spread to the liver or other abdominal or extra-abdominal organs.
A total of 551 patients were randomly assigned to receive 6 months of standard fluorouracil plus folinic acid and 537 to receive 6 months of gemcitabine in the open-label trial. Patients were followed for a median of 2 years.
At the time of data analysis, 753 (69%) of the study subjects had died. Median survival was 23 months for the fluorouracil group and 23.6 months for the gemcitabine group, a nonsignificant difference.
Interim survival estimates for the fluorouracil group were 79% at 12 months and 48% at 24 months. Comparable estimates for the gemcitabine group were 80% at 12 months and 49% at 24 months, again a nonsignificant difference.
Similarly, median progression-free survival was 14 months for both study groups. Interim progression-free survival rates for fluorouracil was 56% at 12 months and 31% at 24 months, which was not significantly different from 61% and 30%, respectively, with gemcitabine, Dr. Neoptolemos and his colleagues said (JAMA 2010;304:1073-81).
About twice as many patients receiving fluorouracil (14%) as those receiving gemcitabine (7.5%) reported serious adverse effects related to their treatment. Patients in the fluorouracil group reported significantly more stomatitis and diarrhea, while those in the gemcitabine group reported more hematologic toxicity.
Quality of life scores were similar between the two groups.
The investigators currently are conducting another study to compare combined gemcitabine plus capecitabine (Xeloda) (a fluoropyrimidine) against gemcitabine alone for pancreatic cancer.
In an editorial, Dr. Eileen M. O’Reilly of Memorial Sloan-Kettering Cancer Center and Weill Medical College of Cornell University, New York, said that even though gemcitabine did not prove superior to standard fluorouracil chemotherapy in this study, the findings provide a significant contribution by helping to firmly establish the value of adjuvant chemotherapy alone, rather than chemotherapy plus radiation, for resected pancreatic cancer.
It is now clear, she added, that both gemcitabine and fluorouracil offer a modest but real improvement in overall survival, with about twice as many patients who receive either adjuvant agent surviving for 5 years, compared with those who don’t receive either one.
The study by Neoptolemos et al also demonstrates that for patients unable to tolerate gemcitabine, there is now a clearly validated alternative with fluorouracil, she said (JAMA 2010;304:1124-5).
No financial conflicts of interest were reported by the study authors. This study was supported by Cancer Research UK, National Cancer Institute of Canada, Canadian Cancer Society, Fonds de Recherche de la Societe Nationale Francaise de Gastroenterologie, Fondazioone Italiana Malattie del Pancreas, Health and Medical Research Council of Australia, Cancer Councils of New South Wales, Queensland, Victoria, and South Austalia, and the UK National Institute for Health Research at Royal Marsden Hospital.
Dr. O’Reilly reported receiving research funding from Sanofi-Aventis and consulting fees from Genentech.
From JAMA
Adjuvant Gemcitabine Comparable to Fluorouracil for Resected Pancreatic Cancer
Gemcitabine as adjuvant chemotherapy following complete resection for pancreatic cancer did not improve overall survival when compared with standard chemotherapy with fluorouracil plus folinic acid in a randomized, controlled, open-label phase III trial, according to a report in the Sept. 8 issue of JAMA.
In what they described as “the largest ever adjuvant trial conducted in pancreatic cancer,” median overall survival and median progression-free survival were essentially the same between the two chemotherapies, said Dr. John P. Neoptolemos of the University of Liverpool (England) and his associates.
These findings do not confirm the results of a much smaller study involving patients with nonresected advanced pancreatic cancer, in which gemcitabine (Gemzar) conferred a survival benefit compared with fluorouracil.
The 1,088 patients in the European Study Group for Pancreatic Cancer–3 (ESPAC-3) trial were treated at 159 centers in 17 countries after undergoing complete macroscopic resection for ductal adenocarcinoma of the pancreas. The study subjects had a life expectancy of at least 3 months and showed no evidence of malignant ascites, peritoneal metastasis, or spread to the liver or other abdominal or extra-abdominal organs.
A total of 551 patients were randomly assigned to receive 6 months of standard fluorouracil plus folinic acid and 537 to receive 6 months of gemcitabine in the open-label trial. Patients were followed for a median of 2 years.
At the time of data analysis, 753 (69%) of the study subjects had died. Median survival was 23 months for the fluorouracil group and 23.6 months for the gemcitabine group, a nonsignificant difference.
Interim survival estimates for the fluorouracil group were 79% at 12 months and 48% at 24 months. Comparable estimates for the gemcitabine group were 80% at 12 months and 49% at 24 months, again a nonsignificant difference.
Similarly, median progression-free survival was 14 months for both study groups. Interim progression-free survival rates for fluorouracil was 56% at 12 months and 31% at 24 months, which was not significantly different from 61% and 30%, respectively, with gemcitabine, Dr. Neoptolemos and his colleagues said (JAMA 2010;304:1073-81).
About twice as many patients receiving fluorouracil (14%) as those receiving gemcitabine (7.5%) reported serious adverse effects related to their treatment. Patients in the fluorouracil group reported significantly more stomatitis and diarrhea, while those in the gemcitabine group reported more hematologic toxicity.
Quality of life scores were similar between the two groups.
The investigators currently are conducting another study to compare combined gemcitabine plus capecitabine (Xeloda) (a fluoropyrimidine) against gemcitabine alone for pancreatic cancer.
In an editorial, Dr. Eileen M. O’Reilly of Memorial Sloan-Kettering Cancer Center and Weill Medical College of Cornell University, New York, said that even though gemcitabine did not prove superior to standard fluorouracil chemotherapy in this study, the findings provide a significant contribution by helping to firmly establish the value of adjuvant chemotherapy alone, rather than chemotherapy plus radiation, for resected pancreatic cancer.
It is now clear, she added, that both gemcitabine and fluorouracil offer a modest but real improvement in overall survival, with about twice as many patients who receive either adjuvant agent surviving for 5 years, compared with those who don’t receive either one.
The study by Neoptolemos et al also demonstrates that for patients unable to tolerate gemcitabine, there is now a clearly validated alternative with fluorouracil, she said (JAMA 2010;304:1124-5).
No financial conflicts of interest were reported by the study authors. This study was supported by Cancer Research UK, National Cancer Institute of Canada, Canadian Cancer Society, Fonds de Recherche de la Societe Nationale Francaise de Gastroenterologie, Fondazioone Italiana Malattie del Pancreas, Health and Medical Research Council of Australia, Cancer Councils of New South Wales, Queensland, Victoria, and South Austalia, and the UK National Institute for Health Research at Royal Marsden Hospital.
Dr. O’Reilly reported receiving research funding from Sanofi-Aventis and consulting fees from Genentech.
Even though gemcitabine did not prove superior to standard fluorouracil chemotherapy in this study, the findings provide a significant contribution by helping to firmly establish the value of adjuvant chemotherapy alone, rather than chemotherapy plus radiation, for resected pancreatic cancer, Dr. O’Reilly said.
It is now clear that both gemcitabine and fluorouracil offer a modest but real improvement in overall survival, with about twice as many patients who receive either adjuvant agent surviving for 5 years, compared with those who don’t receive either one.
The study by Neoptolemos et al also demonstrates that for patients unable to tolerate gemcitabine, there is now a clearly validated alternative with fluorouracil, she said.
Eileen M. O’Reilly, M.D., is at Memorial Sloan-Kettering Cancer Center and Weill Medical College of Cornell University, New York. She reported receiving research funding from Sanofi-Aventis and consulting fees from Genentech. These comments are taken from her editorial accompanying Dr. Neoptolemos’ report (JAMA 2010;304:1124-5).
Gemcitabine as adjuvant chemotherapy following complete resection for pancreatic cancer did not improve overall survival when compared with standard chemotherapy with fluorouracil plus folinic acid in a randomized, controlled, open-label phase III trial, according to a report in the Sept. 8 issue of JAMA.
In what they described as “the largest ever adjuvant trial conducted in pancreatic cancer,” Dr. John P. Neoptolemos of the University of Liverpool (England) and his associates found that median overall survival and median progression-free survival were essentially the same with the two chemotherapies.
These findings do not confirm the results of a much smaller study involving patients with nonresected advanced pancreatic cancer, in which gemcitabine (Gemzar) conferred a survival benefit compared with fluorouracil.
The 1,088 patients in the European Study Group for Pancreatic Cancer–3 (ESPAC-3) trial were treated at 159 centers in 17 countries after undergoing complete macroscopic resection for ductal adenocarcinoma of the pancreas. The study subjects had a life expectancy of at least 3 months and showed no evidence of malignant ascites, peritoneal metastasis, or spread to the liver or other abdominal or extra-abdominal organs.
A total of 551 patients were randomly assigned to receive 6 months of standard fluorouracil plus folinic acid and 537 to receive 6 months of gemcitabine in the open-label trial. Patients were followed for a median of 2 years.
At the time of data analysis, 753 (69%) of the study subjects had died. Median survival was 23 months for the fluorouracil group and 23.6 months for the gemcitabine group, a nonsignificant difference.
Interim survival estimates for the fluorouracil group were 79% at 12 months and 48% at 24 months. Comparable estimates for the gemcitabine group were 80% at 12 months and 49% at 24 months, again a nonsignificant difference.
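These figures hang together: the median survival is by definition the time at which the survival curve crosses 50%, so 24-month survival just under 50% is exactly what medians of 23 and 23.6 months predict. A minimal sketch of that consistency check (the figures are taken from the report; the check itself is this editor's illustration, not the authors' analysis):

```python
# Median survival is the time at which the survival curve crosses 50%,
# so with medians just under 24 months, S(24 months) should sit just
# below 50% -- as the reported interim estimates do.
medians = {"fluorouracil": 23.0, "gemcitabine": 23.6}   # months
surv_24mo = {"fluorouracil": 0.48, "gemcitabine": 0.49}  # reported estimates

for arm in medians:
    assert medians[arm] < 24 and surv_24mo[arm] < 0.50
    print(f"{arm}: median {medians[arm]} mo < 24 mo, "
          f"so S(24) < 50% (reported {surv_24mo[arm]:.0%})")
```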
Similarly, median progression-free survival was 14 months for both study groups. Interim progression-free survival rates for fluorouracil were 56% at 12 months and 31% at 24 months, not significantly different from 61% and 30%, respectively, with gemcitabine, Dr. Neoptolemos and his colleagues said (JAMA 2010;304:1073-81).
About twice as many patients receiving fluorouracil (14%) as those receiving gemcitabine (7.5%) reported serious adverse effects related to their treatment. Patients in the fluorouracil group reported significantly more stomatitis and diarrhea, while those in the gemcitabine group reported more hematologic toxicity.
Quality of life scores were similar between the two groups.
The investigators are currently conducting another study comparing gemcitabine plus capecitabine (Xeloda), a fluoropyrimidine, with gemcitabine alone for pancreatic cancer.
In an editorial, Dr. Eileen M. O’Reilly of Memorial Sloan-Kettering Cancer Center and Weill Medical College of Cornell University, New York, said that even though gemcitabine did not prove superior to standard fluorouracil chemotherapy in this study, the findings provide a significant contribution by helping to firmly establish the value of adjuvant chemotherapy alone, rather than chemotherapy plus radiation, for resected pancreatic cancer.
It is now clear, she added, that both gemcitabine and fluorouracil offer a modest but real improvement in overall survival, with about twice as many patients who receive either adjuvant agent surviving for 5 years, compared with those who don’t receive either one.
The study by Neoptolemos et al also demonstrates that for patients unable to tolerate gemcitabine, there is now a clearly validated alternative with fluorouracil, she said (JAMA 2010;304:1124-5).
No financial conflicts of interest were reported by the study authors. This study was supported by Cancer Research UK, National Cancer Institute of Canada, Canadian Cancer Society, Fonds de Recherche de la Societe Nationale Francaise de Gastroenterologie, Fondazione Italiana Malattie del Pancreas, Health and Medical Research Council of Australia, Cancer Councils of New South Wales, Queensland, Victoria, and South Australia, and the UK National Institute for Health Research at Royal Marsden Hospital.
Dr. O’Reilly reported receiving research funding from Sanofi-Aventis and consulting fees from Genentech.
From JAMA
Major Finding: Neither overall survival nor progression-free survival was significantly better with adjuvant gemcitabine than with standard fluorouracil plus folinic acid.
Data Source: A prospective, international, randomized, open-label study involving 1,088 patients with completely resected pancreatic cancer.
Disclosures: No financial conflicts of interest were reported. This study was supported by Cancer Research UK, National Cancer Institute of Canada, Canadian Cancer Society, Fonds de Recherche de la Societe Nationale Francaise de Gastroenterologie, Fondazione Italiana Malattie del Pancreas, Health and Medical Research Council of Australia, Cancer Councils of New South Wales, Queensland, Victoria, and South Australia, and the UK National Institute for Health Research at Royal Marsden Hospital.
Symptom severity scores were calculated for each patient based on self- or parental report of the severity of cough, fever, chills, fatigue, nasal congestion, wheezing, vomiting, headache, muscle ache, sore throat, ear pain, and nausea. Possible scores ranged from 1 for a single mild symptom to 36 for 12 severe symptoms.
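The scoring arithmetic can be illustrated with a short sketch. The report's exact rubric is not reproduced here, so the mapping below is an assumption consistent with the stated range: each of the 12 symptoms rated from 0 (absent) to 3 (severe), with the ratings summed, giving 1 for a single mild symptom and 36 for 12 severe ones.

```python
# Hypothetical illustration of the symptom severity score described above.
# Assumed rubric (not verified against the paper): each symptom is rated
# 0 (absent), 1 (mild), 2 (moderate), or 3 (severe), and ratings are summed.
SYMPTOMS = [
    "cough", "fever", "chills", "fatigue", "nasal congestion", "wheezing",
    "vomiting", "headache", "muscle ache", "sore throat", "ear pain", "nausea",
]

def severity_score(ratings):
    """Sum per-symptom ratings; symptoms not reported count as 0 (absent)."""
    for name, level in ratings.items():
        if name not in SYMPTOMS:
            raise ValueError(f"unknown symptom: {name}")
        if not 0 <= level <= 3:
            raise ValueError(f"rating out of range for {name}: {level}")
    return sum(ratings.get(name, 0) for name in SYMPTOMS)

# One mild symptom gives the scale's minimum; all 12 severe give its maximum.
print(severity_score({"cough": 1}))              # 1
print(severity_score({s: 3 for s in SYMPTOMS}))  # 36
```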
Symptom severity scores were comparable for the three infections, with a median score of 14 for 2009 H1N1, 16 for the 2008-2009 seasonal flu, and 17 for 2007-2008 seasonal flu. The proportion of patients who received antiviral medications was similar in all three study groups.
The cumulative incidence of hospital admission within 30 days of onset per 1,000 residents was 0.25 for H1N1, 0.15 for seasonal flu in 2008-2009, and 0.50 for seasonal flu in 2007-2008, which are nonsignificant differences, Dr. Belongia and his associates wrote (JAMA 2010;304:1091-8).
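For readers unfamiliar with the measure, cumulative incidence per 1,000 residents is simply admissions divided by the population at risk, scaled to 1,000. The back-calculated counts below are illustrative arithmetic only, assuming the roughly 50,000-person cohort; the paper reports the rates, not these counts.

```python
# Cumulative incidence per 1,000 = admissions / population * 1,000.
def incidence_per_1000(admissions, population):
    return admissions / population * 1000

# Inverting the reported rates for a ~50,000-person cohort shows the small
# absolute admission counts they imply (illustrative arithmetic only).
population = 50_000
for label, rate in [("2009 H1N1", 0.25),
                    ("2008-2009 seasonal flu", 0.15),
                    ("2007-2008 seasonal flu", 0.50)]:
    implied = rate * population / 1000
    print(f"{label}: {rate} per 1,000 ~ {implied:g} admissions")
```

These small absolute counts are in line with the investigators' caution that the most serious outcomes were too few for firm comparisons.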
“Other published studies have reported a higher than expected incidence of hospitalization and death associated with 2009 H1N1 infection, particularly in children. This finding may be due to the greatly elevated incidence of 2009 H1N1 influenza in a highly susceptible population of children and young adults rather than increased virulence of 2009 H1N1 relative to seasonal influenza A viruses,” they said.
The investigators cautioned that their findings regarding the most serious outcomes, such as pneumonia, were limited by the fact that there was a very low number of such outcomes in all the study groups.
This study was funded by a grant from the Centers for Disease Control and Prevention. No financial conflicts of interest were reported.
Major Finding: On a scale of 1 to 36, the median symptom severity score was 14 for 2009 H1N1 influenza, compared with 16 for seasonal flu in 2008-2009 and 17 for seasonal flu in 2007-2008.
Data Source: A single-center, prospective study of all patients who tested positive for H1N1 or seasonal influenza A out of a population of approximately 50,000 adults and children residing in Wisconsin.
Disclosures: The study was funded by a grant from the Centers for Disease Control and Prevention, Atlanta. No financial conflicts of interest were reported.
The severity of the illness caused by the 2009 H1N1 pandemic was no worse than that caused by seasonal influenza A in a Wisconsin population of 50,000 children and adults, according to a report in the Sept. 8 JAMA.
“Our results suggest that the clinical manifestations and risk of hospital admission are similar for the 2009 H1N1 and other seasonal influenza A strains among those presenting for medical care and documented to have influenza infection,” said Dr. Edward A. Belongia of the Marshfield (Wisc.) Clinic Research Foundation and his associates.
Previous studies have been unable to make direct comparisons of the spectrum of influenza illnesses because all the data on H1N1 have been surveillance data, particularly reports on hospital admissions and fatalities. “These reports have provided valuable descriptive information, but differing criteria for influenza testing by season and the lack of uniform standards for data collection and reporting limit comparisons with other influenza viruses,” Dr. Belongia and his colleagues said.
By contrast, their study made comparisons using a defined population in which enrollment and laboratory methods were consistent across all subjects. The researchers compared the characteristics of the illness caused by the pandemic with those of the illness caused by seasonal flu infection using both inpatient and outpatient medical records for all residents of 14 ZIP code areas surrounding Marshfield, an area home to approximately 50,000 people who receive all their medical care at the clinic facilities.
All patients who presented for medical care with at least one flu symptom were tested for influenza using nasopharyngeal or nasal swabs. The study periods encompassed 10 weeks during the 2007-2008 flu season, 12 weeks during the 2008-2009 flu season, and 27 weeks during the 2009 pandemic.
There were 545 cases of 2009 H1N1 infection, 221 cases of seasonal H1N1 in 2008-2009, and 632 cases of H3N2 infection in 2007-2008.
Symptom severity scores were calculated for each patient based on self- or parental report of the severity of cough, fever, chills, fatigue, nasal congestion, wheezing, vomiting, headache, muscle ache, sore throat, ear pain, and nausea. Possible scores ranged from 1 for a single mild symptom to 36 for 12 severe symptoms.
Symptom severity scores were comparable for the three infections, with a median score of 14 for 2009 H1N1, 16 for the 2008-2009 seasonal flu, and 17 for 2007-2008 seasonal flu. The proportion of patients who received antiviral medications was similar in all three study groups.
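The scoring scheme can be sketched in a few lines. This is a minimal illustration, not the study's instrument: the article does not specify the per-symptom scale, so the sketch assumes each of the 12 symptoms is rated 0 (absent) to 3 (severe), which reproduces the stated range of 1 (a single mild symptom) to 36 (all 12 symptoms severe).

```python
# Hedged sketch of the 12-symptom severity score described above.
# Assumed rating scale (not stated in the article): 0 = absent, 1 = mild,
# 2 = moderate, 3 = severe, summed across symptoms for a 1-36 total.

SYMPTOMS = [
    "cough", "fever", "chills", "fatigue", "nasal congestion", "wheezing",
    "vomiting", "headache", "muscle ache", "sore throat", "ear pain", "nausea",
]

def severity_score(ratings):
    """Sum per-symptom ratings (0-3) into a single total score."""
    unknown = set(ratings) - set(SYMPTOMS)
    if unknown:
        raise ValueError(f"unknown symptoms: {unknown}")
    return sum(ratings.values())

# A hypothetical patient with three moderate symptoms scores 6.
print(severity_score({"cough": 2, "fever": 2, "fatigue": 2}))  # 6
```

Under this assumed scale, the reported medians of 14-17 correspond to roughly five to eight moderate-to-severe symptoms per patient.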
The cumulative incidence of hospital admission within 30 days of illness onset, per 1,000 residents, was 0.25 for 2009 H1N1, 0.15 for seasonal flu in 2008-2009, and 0.50 for seasonal flu in 2007-2008; these differences were not statistically significant, Dr. Belongia and his associates wrote (JAMA 2010;304:1091-8).
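The cumulative-incidence figures are simple ratios: admissions divided by the population, scaled to 1,000 residents. The admission count below is hypothetical, chosen only to show that a rate near 0.25 per 1,000 in a population of roughly 50,000 corresponds to only about a dozen admissions, which is why the event counts were so small.

```python
# Sketch of the cumulative-incidence arithmetic (events per 1,000 residents).
# The event count is hypothetical, for illustration only.

def incidence_per_1000(events, population):
    return 1000 * events / population

# ~13 admissions among ~50,000 residents gives about 0.26 per 1,000.
print(round(incidence_per_1000(13, 50_000), 2))  # 0.26
```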
“Other published studies have reported a higher than expected incidence of hospitalization and death associated with 2009 H1N1 infection, particularly in children. This finding may be due to the greatly elevated incidence of 2009 H1N1 influenza in a highly susceptible population of children and young adults rather than increased virulence of 2009 H1N1 relative to seasonal influenza A viruses,” they said.
The investigators cautioned that their findings on the most serious outcomes, such as pneumonia, were limited by the very small number of such events in all three study groups.
This study was funded by a grant from the Centers for Disease Control and Prevention. No financial conflicts of interest were reported.
Preference for Geometric Over Social Images May Signal Autism
Toddlers at risk for autism spectrum disorders preferred to look at geometric patterns more than social images in a 1-minute experiment, in contrast to developmentally delayed and developmentally normal toddlers, according to a report published online Sept. 6 in the Archives of General Psychiatry.
“When the percentage of time a toddler spent fixating on geometric patterns was 69% or greater, the positive predictive validity for accurately classifying that toddler as having an autism spectrum disorder was 100%,” said Karen Pierce, Ph.D., department of neurosciences, Autism Center of Excellence, University of California, San Diego, and her associates.
Other investigators have used eye-tracking technology to assess differences between autistic and other infants and young children in response to pictures of faces. However, “given the active pace of brain development during the infancy period combined with high intersubject variability of eye tracking patterns to faces during this time, examining the percentage of time an infant attends to the eye region of a face may not be stable enough to make diagnostically predictive claims,” Dr. Pierce and her colleagues noted.
“An alternative method to investigate early indicators of autism is to measure a very simple behavior: preference.”
Developmentally normal infants and toddlers are known to show a distinct preference for faces when presented with two different images to look at. “What infants prefer to look at when given a choice between two images may turn out to be a more clearly observable indicator of risk than how they look at a single image,” the researchers said.
To test this hypothesis, they used eye-tracking technology to monitor subjects’ gaze when watching split-screen moving images of children dancing or performing yoga on one side (dynamic social images) and moving geometric shapes on the other (dynamic geometric images). This movie contained 28 separate scenes, with each scene varying in duration from 2 to 4 seconds, for a total presentation time of 1 minute.
The study subjects were 110 toddlers aged 14-42 months. A total of 37 children had autism spectrum disorders (27 with autistic disorder, 9 with pervasive developmental disorder not otherwise specified, and 1 with autism spectrum features). Another 51 children, matched for age and gender, were developmentally normal. The remaining 22 children, who had developmental delay (12 with language delay and 10 with global developmental delay), were matched with the autism group on chronological age, verbal and nonverbal developmental quotient as assessed on the Mullen Scales of Early Learning, and overall functioning.
Overall, the percentage of time that the toddlers spent viewing the geometric images was significantly different among the diagnostic groups. Toddlers with autism spectrum disorders fixated significantly longer on geometric images than did developmentally typical or developmentally delayed toddlers.
Forty percent of the autism group spent more than half their viewing time fixated on the geometric images, compared with only 2% of the developmentally typical toddlers and 9% of the developmentally delayed toddlers. Many of the children with autism spectrum disorders spent more than 70% of their viewing time watching the geometric images, and several spent more than 90% of their viewing time doing so – a percentage that was never found in either of the other groups.
When a cutoff was established at 69% of viewing time spent on geometric rather than social images, autism spectrum disorders were correctly predicted in 100% of the affected children and in none of the other groups of children.
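The 100% figure is a positive predictive value: of the toddlers above the cutoff, the fraction who truly had an autism spectrum disorder. A minimal sketch of that calculation, using hypothetical counts rather than the study's raw data:

```python
# Sketch of positive predictive value: PPV = TP / (TP + FP).
# A PPV of 1.0 means every child above the cutoff was in the autism group.
# Counts below are hypothetical, for illustration only.

def ppv(true_pos, false_pos):
    return true_pos / (true_pos + false_pos)

# If every toddler exceeding the 69% cutoff had an autism spectrum
# disorder (no false positives), PPV is 1.0, i.e. 100%.
print(ppv(15, 0))  # 1.0
```

Note that a perfect PPV says nothing about sensitivity; as the authors acknowledge below, most children with autism spectrum disorders fell under the cutoff.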
The children with autism spectrum disorders who showed the most distinct preference for geometric images also showed a unique pattern of saccades, or abrupt eye movements, while viewing the movie. They showed significantly fewer saccades than did any of the other children when looking at their preferred geometric stimuli, and conversely they showed nearly twice as many saccades when looking at the social stimuli.
Other researchers have postulated that increased saccades while viewing faces reflect anxiety among people with autism spectrum disorders, Dr. Pierce and her associates said (Arch. Gen. Psychiatry 2010 Sept. 6 [doi:10.1001/archgenpsychiatry.2010.113]).
The findings show that a preference for watching geometric images plus aberrations in the number of saccades might indicate risk for an autism spectrum disorder in children as young as 14 months of age. In addition, “we believe that it may be easy to capture this preference using relatively inexpensive techniques in mainstream clinical settings such as a pediatrician’s office,” they noted.
Infants and children found to follow these patterns of eye movement would be “excellent candidates for further developmental evaluation and possible early treatment.”
However, it is important to note that 60% of the subjects with autism spectrum disorders did not exhibit these patterns, and that those who did were not necessarily the ones with the most pronounced symptoms.
Moreover, “while the discovery of a putative new early warning sign of autism is encouraging, results should be interpreted with some caution” because approximately 20% of the initial study population was excluded because of poor compliance with testing, the researchers added.
This study was funded by grants from the National Institute of Mental Health.
From the Archives of General Psychiatry
Major Finding: A large subset of toddlers with autism spectrum disorder prefer looking at moving images of geometric shapes to looking at those of children. They also show a unique pattern of saccades when viewing such images. In contrast, developmentally normal and developmentally delayed toddlers do not show these patterns.
Data Source: A study of eye-tracking patterns in 110 children aged 14-42 months.
Disclosures: This study was funded by grants from the National Institute of Mental Health.
Quick Test for Rifampin-Resistant TB Shows Promise
An automated assay designed for use in Third World regions rapidly and accurately detected Mycobacterium tuberculosis infection and resistance to rifampin, according to a report published online Sept. 1 in the New England Journal of Medicine.
In a multicenter, prospective trial in South Africa, Peru, India, and Azerbaijan involving 1,730 patients suspected of having TB, the Xpert MTB/RIF assay correctly identified 72% of patients whose sputum smears were negative, as well as 98% of those with positive smears. It also correctly identified 98% of rifampin-resistant bacteria and 98% of rifampin-sensitive bacteria, said Dr. Catharina C. Boehme of the Foundation for Innovative New Diagnostics (FIND), Geneva, and her associates.
“Only a small fraction” of patients worldwide with drug-resistant TB currently has access to sufficiently sensitive diagnostic testing and drug-susceptibility testing, Dr. Boehme noted, because of the complex technologies required for mycobacterial culture and nucleic-acid amplification (N. Engl. J. Med. 2010 Sept. 1 [doi: 10.1056/NEJMoa0907847]).
“Globally, ineffective tuberculosis detection and the rise of multidrug resistance and extensively drug-resistant TB have led to calls for dramatic expansion of culture capability and drug-susceptibility testing in countries in which the disease is endemic,” Dr. Boehme and her colleagues noted. “Unfortunately, the infrastructure and trained personnel required for such testing are not available except in a limited number of reference centers, and results of testing are often not available for at least 4 months, which dramatically reduces its clinical utility.”
FIND developed the new assay to address those needs. FIND also designed, supervised, and sponsored the study evaluating the assay’s performance.
The Xpert MTB/RIF kit includes a disposable plastic cartridge that contains all the reagents needed for bacterial analysis, nucleic acid extraction, PCR amplification, and amplicon detection. The only manual step is the “nonprecise” addition of a bactericidal buffer to sputum before transferring the sample to the cartridge. Because the cartridge is never reopened, there is little chance of amplicon contamination, the investigators noted. In addition, the sputum is inactivated at the same time it is liquefied, thus making a biosafety cabinet unnecessary.
The cartridge is then inserted into the GeneXpert device, which delivers test results within 2 hours. Relatively unskilled health care workers at all the study locations became proficient in the assay’s use after a brief training session. Recent data from a separate study confirm that the assay generates no infectious aerosols, which obviates the need for laboratories equipped for advanced biosafety.
A total of 1,462 patients (4,386 sputum samples) were assessed: 567 had smear-positive, culture-positive TB; 174 had smear-negative, culture-positive TB; 105 had clinically defined but smear-negative, culture-negative TB; and 616 had no clinical, smear, or culture evidence of TB. The other 268 enrolled patients were excluded from the study for a variety of reasons, including 103 who provided an inadequate number of sputum samples and 10 whose samples were of inadequate volume.
Overall sensitivity of the device among patients with culture-positive TB was 97.6%, with no significant variation in performance across the study sites. That suggests that the study findings “are likely to be widely applicable,” Dr. Boehme and her associates said.
Sensitivity was 99.8% for smear-positive, culture-positive cases and 90.2% for smear-negative, culture-positive cases. The assay correctly gave negative results in 604 of the 609 patients who proved not to have TB, for a specificity of 99.2%.
In addition, “the MTB/RIF test correctly detected rifampin resistance in 209 of 211 patients (99.1% sensitivity)” and correctly identified rifampin susceptibility in all 506 patients who had it (100% specificity).
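The sensitivity and specificity percentages above follow directly from the reported counts; this small sketch reproduces them as simple ratios.

```python
# Reproduce the reported Xpert MTB/RIF performance figures as ratios,
# rounded to one decimal place as in the article.

def pct(numerator, denominator):
    return round(100 * numerator / denominator, 1)

print(pct(209, 211))  # rifampin-resistance sensitivity -> 99.1
print(pct(506, 506))  # rifampin-susceptibility specificity -> 100.0
print(pct(604, 609))  # specificity in patients without TB -> 99.2
```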
“In view of the low sensitivity of smear microscopy for the diagnosis of TB in patients with HIV infection, the increased sensitivity of the MTB/RIF test – notably, among patients with smear-negative tuberculosis – at the two South African sites with 60% to 80% prevalence of HIV infection is encouraging,” the researchers noted.
It is not yet known whether the results can be replicated “in microscopy centers, health posts, and other point-of-treatment settings where temperature and electricity supply will be more variable and training issues will be more relevant,” the investigators cautioned.
“Large-scale projects to show the feasibility and effect of MTB/RIF testing at such sites are under way,” they added.
The study was designed and supervised by the sponsor (and maker) of the Xpert MTB/RIF, FIND, with additional development support provided by the National Institutes of Health, Cepheid, and the Bill and Melinda Gates Foundation. The investigators reported no additional disclosures.
In an editorial accompanying Dr. Boehme’s report, Dr. Peter M. Small and Dr. Madhukar Pai said that the Xpert MTB/RIF assay has the potential to revolutionize the diagnosis of tuberculosis and has several critical advantages over conventional nucleic acid amplification tests.
The assay is “simple to perform with minimal training, is not prone to cross-contamination, requires minimal biosafety facilities, and has a high sensitivity in smear-negative TB (the last factor being particularly relevant in patients with HIV infection),” they noted (N. Engl. J. Med. 2010 Sept. 1 [doi:10.1056/NEJMe1008496]).
However, “because Boehme et al. used the test at reference laboratories, their study offers only indirect proof of concept for use in [other] settings. Critical to rapid scale-up of the test will be the results of additional studies to determine how it performs in such settings and whether its use improves outcomes for patients in a cost-effective manner,” Dr. Small and Dr. Pai added.
“If an improved rapid nucleic acid amplification test is adopted globally, it could help avert more than 15 million TB-related deaths by 2050,” they noted.
Dr. Small is at the Global Health Program of the Bill and Melinda Gates Foundation and the Institute for Systems Biology, both in Seattle. Dr. Pai is in the department of epidemiology and biostatistics at McGill University and at the Montreal Chest Institute.
The Xpert MTB/RIF assay has the potential to revolutionize the diagnosis of tuberculosis, and it has several critical advantages over conventional nucleic acid amplification tests, said Dr. Peter M. Small and Dr. Madhukar Pai.
It is “simple to perform with minimal training, is not prone to cross-contamination, requires minimal biosafety facilities, and has a high sensitivity in smear-negative TB (the last factor being particularly relevant in patients with HIV infection),” they said.
However, “because Boehme et al. used the test at reference laboratories, their study offers only indirect proof of concept for use in [other] settings. Critical to rapid scale-up of the test will be the results of additional studies to determine how it performs in such settings and whether its use improves outcomes for patients in a cost-effective manner,” Dr. Small and Dr. Pai added.
“If an improved rapid nucleic acid amplification test is adopted globally, it could help avert more than 15 million TB-related deaths by 2050,” they noted.
Peter M. Small, M.D., is at the Global Health Program of the Bill and Melinda Gates Foundation and the Institute for Systems Biology, both in Seattle. Madhukar Pai, M.D., Ph.D., is in the department of epidemiology and biostatistics at McGill University and at the Montreal Chest Institute. These comments were taken from their editorial accompanying Dr. Boehme’s report (N. Engl. J. Med. 2010 Sept. 1 [doi:10.1056/NEJMe1008496]).
An automated assay designed for use in Third World regions rapidly and accurately detected Mycobacterium tuberculosis infection and resistance to rifampin, according to a report published online Sept. 1 in the New England Journal of Medicine.
In a multicenter, prospective trial in South Africa, Peru, India, and Azerbaijan involving 1,730 patients suspected of having TB, the Xpert MTB/RIF correctly identified 72% of patients whose sputum smears were negative, as well as 98% of those with positive smears. It also correctly identified 98% of rifampin-resistant bacteria and 98% of rifampin-sensitive bacteria, said Dr. Catharina C. Boehme of the Foundation for Innovative New Diagnostics (FIND), Geneva, and her associates.
“Only a small fraction” of patients worldwide with drug-resistant TB currently has access to sufficiently sensitive diagnostic testing and drug-susceptibility testing, Dr. Boehme noted, because of the complex technologies required for mycobacterial culture and nucleic-acid amplification (N. Engl. J. Med. 2010 Sept. 1 [doi: 10.1056/NEJMoa0907847]).
“Globally, ineffective tuberculosis detection and the rise of multidrug resistance and extensively drug-resistant TB have led to calls for dramatic expansion of culture capability and drug-susceptibility testing in countries in which the disease is endemic,” Dr. Boehme and her colleagues noted. “Unfortunately, the infrastructure and trained personnel required for such testing are not available except in a limited number of reference centers, and results of testing are often not available for at least 4 months, which dramatically reduces its clinical utility.”.
FIND developed the new assay to address those needs. FIND also designed, supervised, and sponsored the study evaluating the assay’s performance.
The Xpert MTB/RIF kit includes a disposable plastic cartridge that contains all the reagents needed for bacterial analysis, nucleic acid extraction, PCR amplification, and amplicon detection. The only manual step is the “nonprecise” addition of a bactericidal buffer to sputum before transferring the sample to the cartridge. Because the cartridge is never reopened, there is little chance of amplicon contamination, the investigators noted. In addition, the sputum is inactivated at the same time it is liquefied, thus making a biosafety cabinet unnecessary.
The cartridge is then inserted into the GeneXpert device, which delivers test results within 2 hours. Relatively unskilled health care workers at all the study locations became proficient in the assay’s use after a brief training session. Recent data from a separate study confirm that the assay generates no infectious aerosols, which obviates the need for laboratories equipped for advanced biosafety.
Of the 1,462 patients (4,386 sputum samples) assessed, 567 patients had smear-positive and culture-positive TB; 174 had smear-negative but culture-positive TB; 105 had clinically defined but smear-negative, culture-negative TB; and 616 had no clinical, smear, or culture evidence of TB. The remaining 268 patients were excluded from the study for a variety of reasons, including 103 who had an inadequate number of sputum samples and 10 who had an inadequate volume of sputum samples.
Overall sensitivity of the device among patients with culture-positive TB was 97.6%, with no significant variation in performance across the study sites. That suggests that the study findings “are likely to be widely applicable,” Dr. Boehme and her associates said.
Sensitivity was 99.8% for smear-positive and culture-positive cases, and 90.2% for smear-negative but culture-positive cases. The assay was specific in 604 of the 609 patients who proved not to have TB (99.2%).
In addition, “the MTB/RIF test correctly detected rifampin resistance in 209 of 211 patients (99.1% sensitivity)” and correctly identified rifampin susceptibility in all 506 patients with rifampin-susceptible isolates (100% specificity).
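The accuracy figures above can be recomputed directly from the raw counts in the report. A minimal sketch (the counts are from the article; the helper function is illustrative, not from the study):

```python
# Recompute the reported accuracy percentages from the article's raw counts.

def pct(correct, total):
    """Proportion expressed as a percentage, rounded to one decimal place."""
    return round(100.0 * correct / total, 1)

# Specificity: 604 of 609 patients without TB tested negative.
specificity = pct(604, 609)       # 99.2

# Rifampin resistance: detected in 209 of 211 resistant patients.
rif_sensitivity = pct(209, 211)   # 99.1

# Rifampin susceptibility: identified in all 506 susceptible patients.
rif_specificity = pct(506, 506)   # 100.0

print(specificity, rif_sensitivity, rif_specificity)
```

Each result matches the percentage quoted in the text, confirming the counts and percentages are internally consistent.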
“In view of the low sensitivity of smear microscopy for the diagnosis of TB in patients with HIV infection, the increased sensitivity of the MTB/RIF test – notably, among patients with smear-negative tuberculosis – at the two South African sites with 60% to 80% prevalence of HIV infection is encouraging,” the researchers noted.
It is not yet known whether the results can be replicated “in microscopy centers, health posts, and other point-of-treatment settings where temperature and electricity supply will be more variable and training issues will be more relevant,” the investigators cautioned.
“Large-scale projects to show the feasibility and effect of MTB/RIF testing at such sites are under way,” they added.
The study was designed and supervised by the sponsor (and maker) of the Xpert MTB/RIF, FIND, with additional development support provided by the National Institutes of Health, Cepheid, and the Bill and Melinda Gates Foundation. The investigators reported no additional disclosures.
In an editorial accompanying Dr. Boehme’s report, Dr. Peter M. Small and Dr. Madhukar Pai said that the Xpert MTB/RIF assay has the potential to revolutionize the diagnosis of tuberculosis and has several critical advantages over conventional nucleic acid amplification tests.
The assay is “simple to perform with minimal training, is not prone to cross-contamination, requires minimal biosafety facilities, and has a high sensitivity in smear-negative TB (the last factor being particularly relevant in patients with HIV infection),” they noted (N. Engl. J. Med. 2010 Sept. 1 [doi:10.1056/NEJMe1008496]).
However, “because Boehme et al. used the test at reference laboratories, their study offers only indirect proof of concept for use in [other] settings. Critical to rapid scale-up of the test will be the results of additional studies to determine how it performs in such settings and whether its use improves outcomes for patients in a cost-effective manner,” Dr. Small and Dr. Pai added.
“If an improved rapid nucleic acid amplification test is adopted globally, it could help avert more than 15 million TB-related deaths by 2050,” they noted.
Dr. Small is at the Global Health Program of the Bill and Melinda Gates Foundation and the Institute for Systems Biology, both in Seattle. Dr. Pai is in the department of epidemiology and biostatistics at McGill University and at Montreal Chest Institute.
From the New England Journal of Medicine
Intensive BP Control Slows CKD Progression Only in Select Patients
Intensive blood pressure control doesn’t slow the progression of chronic kidney disease any better than standard blood pressure control in most patients, according to a report in the Sept. 2 New England Journal of Medicine.
It appears that the more intensive approach may benefit only patients who have proteinuria with a protein:creatinine ratio greater than 0.22, a value that is compatible with the widely accepted threshold of 300 mg/day for absolute urinary protein excretion, said Dr. Lawrence J. Appel of Johns Hopkins University, Baltimore, and his associates in the AASK (African-American Study of Kidney Disease and Hypertension) Collaborative Research Group.
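The stated equivalence between a protein:creatinine ratio of 0.22 and roughly 300 mg/day of urinary protein depends on daily creatinine excretion. A back-of-the-envelope sketch (the assumed excretion of about 1.36 g/day is a typical adult value chosen here for illustration, not a figure from the report):

```python
# Back-of-the-envelope check: protein:creatinine ratio -> absolute excretion.
# The 0.22 ratio is from the report; the daily creatinine excretion value
# (~1.36 g/day, a typical adult figure) is an assumption for illustration.

ratio = 0.22                   # urinary protein:creatinine (mg/mg)
creatinine_mg_per_day = 1360   # assumed daily creatinine excretion

protein_mg_per_day = ratio * creatinine_mg_per_day
print(protein_mg_per_day)      # ~300 mg/day, consistent with the cited threshold
```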
Until now, “few trials have tested the effects of intensive blood pressure control [compared with conventional control] on the progression of chronic kidney disease, and the findings from such trials have been inconsistent. Despite a lack of compelling evidence, numerous guidelines recommend a reduced blood pressure target in patients with CKD,” they wrote.
Previous studies have rarely followed patients beyond 5 years, even though it typically takes longer than that for end-stage renal disease (ESRD) to develop in patients with CKD.
The AASK study compared outcomes between the two approaches to BP control in 1,094 black adults with mild to moderate hypertensive chronic kidney disease (defined as diastolic BP greater than 95 mm Hg and a glomerular filtration rate of 20-65 mL/min) but without marked proteinuria. Patients with diabetes were excluded from the trial.
In the first phase of the AASK investigation, patients were randomly assigned to either intensive BP control with a target of 92 mm Hg or lower mean arterial pressure (that is, lower than the usual target of 130/80 mm Hg recommended for CKD patients) or to conventional BP control with a target of 102-107 mm Hg mean arterial pressure (which corresponds to the conventional BP target of 140/90 mm Hg).
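The trial's targets are expressed as mean arterial pressure (MAP). Using the textbook approximation MAP ≈ (systolic + 2 × diastolic) / 3 (a standard formula, not one specified in the report), one can check how the MAP targets line up with the familiar cuff readings:

```python
# Relate the trial's mean-arterial-pressure targets to conventional BP readings
# via the textbook approximation MAP = (systolic + 2 * diastolic) / 3.

def map_approx(systolic, diastolic):
    """Approximate mean arterial pressure in mm Hg."""
    return (systolic + 2 * diastolic) / 3

# Conventional 140/90 mm Hg falls within the 102-107 mm Hg standard-control band.
print(round(map_approx(140, 90), 1))   # 106.7

# The intensive target (<= 92 mm Hg) sits below the MAP of 130/80 mm Hg.
print(round(map_approx(130, 80), 1))   # 96.7
```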
Throughout this initial phase of the trial, which lasted approximately 4 years, mean blood pressure was significantly lower in the intensive-control group (130/78 mm Hg) than in the standard-control group (141/86 mm Hg), yet there was no significant difference in the primary outcome of progression of kidney disease, development of ESRD, or death. Likewise, there was no significant difference between the two approaches in secondary or clinical outcomes.
In the second phase of the AASK investigation, patients who had not yet developed ESRD were invited to continue in a cohort portion of the trial, in which the BP target was 140/90 mm Hg. In 2004, when national guidelines were changed, this target was amended to lower than 130/80 mm Hg.
After a cumulative follow-up of 8-12 years, there still was no significant difference in primary or secondary outcomes between those who were initially assigned to the intensive-control and the standard-control groups. More intensive BP control did not slow the rate of progression of CKD, Dr. Appel and his associates reported (N. Engl. J. Med. 2010;363:918-29).
However, the intensive-control approach did benefit one subgroup of patients with proteinuria: those who had a protein:creatinine ratio of more than 0.22 at baseline. These patients showed a significant reduction in the primary outcome of progression of kidney disease, development of ESRD, or death, as well as in secondary and clinical outcomes.
The reason for this discrepancy is not known. “Overall, it is hard to develop a coherent, biologically plausible argument for a qualitative interaction between harm in patients without proteinuria and benefit in those with proteinuria,” the researchers said.
In an accompanying editorial, Dr. Julie R. Ingelfinger, chief of pediatric nephrology at Massachusetts General Hospital, Boston, and a deputy editor of the New England Journal of Medicine, wrote that the study lends hope to the concept that intensive treatment will improve renal outcomes in at least some patients with hypertension, chronic kidney disease, and microalbuminuria (N. Engl. J. Med. 2010;363:974-6). She noted that the Modification of Diet in Renal Disease trial showed that intensive BP control, compared with standard control, benefited patients who had more than 1 g of proteinuria at baseline. The ESCAPE trial (Effect of Strict Blood Pressure Control and ACE Inhibition on the Progression of Chronic Renal Failure in Pediatric Patients) also demonstrated that intensive BP control with a fixed dose of an ACE inhibitor significantly slowed the progression of renal disease, with the largest effects seen in children who had substantial proteinuria, hypertension, and a reduced GFR at baseline. And intensive BP control was beneficial in a recent study of adults in Italy who had idiopathic glomerular diseases associated with hypertension and proteinuria, Dr. Ingelfinger wrote.
The study was funded by the National Institute of Diabetes and Digestive and Kidney Diseases, the Office of Research in Minority Health, and the National Institutes of Health. King Pharmaceuticals provided financial support and donated antihypertensive medications to each clinical center. Pfizer, AstraZeneca, GlaxoSmithKline, Forest Laboratories, Pharmacia, and Upjohn also donated antihypertensive drugs. None of these companies had any role in the design of the study, the accrual or analysis of data, or the preparation of the manuscript. Some of the investigators reported being in consultant and/or advisory board roles or receiving funds from numerous companies including Daiichi-Sankyo, Novartis, Amgen, King Pharmaceuticals, Abbott, Boehringer-Ingelheim, Litholink, Eli Lilly, Takeda, Merck, and Watson. Dr. Ingelfinger reported having no conflicts of interest.
This study lends hope to the concept that intensive treatment will improve renal outcomes in at least some patients with hypertension, chronic kidney disease, and microalbuminuria.
Data from other studies also support the conclusion that intensive BP control is beneficial in select patients.
The Modification of Diet in Renal Disease trial showed that intensive BP control, compared with standard control, benefited patients who had more than 1 g of proteinuria at baseline. The ESCAPE (Effect of Strict Blood Pressure Control and ACE Inhibition on the Progression of Chronic Renal Failure in Pediatric Patients) trial also demonstrated that intensive BP control with a fixed dose of an ACE inhibitor significantly slowed the progression of renal disease, with the largest effects seen in children who had substantial proteinuria, hypertension, and a reduced GFR at baseline.
In addition, intensive BP control was beneficial in a recent study of adults in Italy who had idiopathic glomerular diseases associated with hypertension and proteinuria.
Julie R. Ingelfinger, M.D., is chief of pediatric nephrology at Massachusetts General Hospital, Boston, and a deputy editor of the New England Journal of Medicine. These comments were summarized from her editorial accompanying the report (N. Engl. J. Med. 2010;363:974-6). She reported having no relevant conflicts of interest.
Intensive BP Control Slows CKD Progression Only in Select Patients
Intensive blood pressure control doesn’t slow the progression of chronic kidney disease any better than standard blood pressure control in most patients, according to a report in the Sept. 2 New England Journal of Medicine.
It appears that the more intensive approach may benefit only patients who have proteinuria with a protein:creatinine ratio greater than 0.22, a value that is compatible with the widely accepted threshold of 300 mg/day for absolute urinary protein excretion, said Dr. Lawrence J. Appel of Johns Hopkins University, Baltimore, and his associates in the AASK (African-American Study of Kidney Disease and Hypertension) Collaborative Research Group.
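The compatibility of the two thresholds is simple arithmetic: a spot urine protein:creatinine ratio (mg of protein per mg of creatinine) multiplied by a typical daily creatinine excretion approximates 24-hour protein excretion. A minimal sketch, assuming a representative creatinine excretion of about 1,400 mg/day (an illustrative figure that varies with muscle mass, not a value from the study):

```python
def estimated_daily_protein_mg(protein_creatinine_ratio: float,
                               creatinine_excretion_mg_per_day: float = 1400) -> float:
    """Estimate 24-hour urinary protein (mg/day) from a spot
    protein:creatinine ratio (mg/mg). The default creatinine
    excretion of ~1,400 mg/day is an illustrative assumption."""
    return protein_creatinine_ratio * creatinine_excretion_mg_per_day

# A ratio of 0.22 corresponds to roughly 300 mg/day of protein,
# in line with the widely used proteinuria threshold.
print(round(estimated_daily_protein_mg(0.22)))  # 308
```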
Until now, “few trials have tested the effects of intensive blood pressure control [compared with conventional control] on the progression of chronic kidney disease, and the findings from such trials have been inconsistent. Despite a lack of compelling evidence, numerous guidelines recommend a reduced blood pressure target in patients with CKD,” they wrote.
Previous studies have rarely followed patients beyond 5 years, even though it typically takes longer than that for end-stage renal disease (ESRD) to develop in patients with CKD.
The AASK study compared outcomes between the two approaches to BP control in 1,094 black adults with mild to moderate hypertensive chronic kidney disease (defined as diastolic BP greater than 95 mm Hg and a glomerular filtration rate of 20-65 mL/min) but without marked proteinuria. Patients with diabetes were excluded from the trial.
In the first phase of the AASK investigation, patients were randomly assigned to either intensive BP control with a target of 92 mm Hg or lower mean arterial pressure (that is, lower than the usual target of 130/80 mm Hg recommended for CKD patients) or to conventional BP control with a target of 102-107 mm Hg mean arterial pressure (which corresponds to the conventional BP target of 140/90 mm Hg).
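The mean-arterial-pressure targets can be related to ordinary cuff readings with the standard clinical approximation MAP ≈ diastolic + (systolic − diastolic)/3. A quick check of the correspondences stated above (the cuff readings are illustrative):

```python
def mean_arterial_pressure(systolic: float, diastolic: float) -> float:
    """Standard clinical approximation: MAP = DBP + (SBP - DBP) / 3."""
    return diastolic + (systolic - diastolic) / 3

# 130/80 mm Hg (the usual CKD target) gives a MAP of about 96.7 mm Hg,
# so the intensive arm's target of 92 mm Hg or lower sits below it.
print(round(mean_arterial_pressure(130, 80), 1))  # 96.7

# 140/90 mm Hg (the conventional target) gives a MAP of about 106.7 mm Hg,
# consistent with the standard arm's 102-107 mm Hg target range.
print(round(mean_arterial_pressure(140, 90), 1))  # 106.7
```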
Throughout this initial phase of the trial, which lasted approximately 4 years, mean blood pressure was significantly lower in the intensive-control group (130/78 mm Hg) than in the standard-control group (141/86 mm Hg), yet there was no significant difference in the primary outcome of progression of kidney disease, development of ESRD, or death. Likewise, there was no significant difference between the two approaches in secondary or clinical outcomes.
In the second phase of the AASK investigation, patients who had not yet developed ESRD were invited to continue in a cohort portion of the trial, in which the BP target was 140/90 mm Hg. In 2004, when national guidelines were changed, this target was amended to lower than 130/80 mm Hg.
After a cumulative follow-up of 8-12 years, there still was no significant difference in primary or secondary outcomes between those who were initially assigned to the intensive-control and the standard-control groups. More intensive BP control did not slow the rate of progression of CKD, Dr. Appel and his associates reported (N. Engl. J. Med. 2010;363:918-29).
However, the intensive-control approach did benefit one subgroup of patients with proteinuria: those who had a protein:creatinine ratio of more than 0.22 at baseline. These patients showed a significant reduction in the primary outcome of progression of kidney disease, development of ESRD, or death, as well as in secondary and clinical outcomes.
The reason for this discrepancy is not known. “Overall, it is hard to develop a coherent, biologically plausible argument for a qualitative interaction between harm in patients without proteinuria and benefit in those with proteinuria,” the researchers said.
The study was funded by the National Institute of Diabetes and Digestive and Kidney Diseases, the Office of Research in Minority Health, and the National Institutes of Health. King Pharmaceuticals provided financial support and donated antihypertensive medications to each clinical center. Pfizer, AstraZeneca, GlaxoSmithKline, Forest Laboratories, Pharmacia, and Upjohn also donated antihypertensive drugs. None of these companies had any role in the design of the study, the accrual or analysis of data, or the preparation of the manuscript. Some of the investigators reported being in consultant and/or advisory board roles or receiving funds from numerous companies including Daiichi-Sankyo, Novartis, Amgen, King Pharmaceuticals, Abbott, Boehringer-Ingelheim, Litholink, Eli Lilly, Takeda, Merck, and Watson.
This study lends hope to the concept that intensive treatment will improve renal outcomes in at least some patients with hypertension, chronic kidney disease, and microalbuminuria.
Data from other studies also support the conclusion that intensive BP control is beneficial in select patients.
The Modification of Diet in Renal Disease trial showed that intensive BP control, compared with standard control, benefited patients who had more than 1 g of proteinuria at baseline. The ESCAPE (Effect of Strict Blood Pressure Control and ACE Inhibition on the Progression of Chronic Renal Failure in Pediatric Patients) trial also demonstrated that intensive BP control with a fixed dose of an ACE inhibitor significantly slowed the progression of renal disease, with the largest effects seen in children who had substantial proteinuria, hypertension, and a reduced GFR at baseline.
In addition, intensive BP control was beneficial in a recent study of adults in Italy who had idiopathic glomerular diseases associated with hypertension and proteinuria.
Julie R. Ingelfinger, M.D., is chief of pediatric nephrology at Massachusetts General Hospital, Boston, and a deputy editor of the New England Journal of Medicine. These comments were summarized from her editorial accompanying the report (N. Engl. J. Med. 2010;363:974-6). She reported having no relevant conflicts of interest.
Major Finding: Compared with standard BP control, intensive BP control failed to slow the progression of CKD, prevent the development of end-stage renal disease, or prevent death in most patients who had mild to moderate chronic kidney disease. Intensive BP control was beneficial only in the subgroup of patients who had proteinuria with a protein:creatinine ratio greater than 0.22 at baseline.
Data Source: AASK, a clinical trial with an initial 4-year randomized phase comparing intensive BP control with standard BP control in 1,094 black adults, as well as an observational cohort phase with a further 4-8 years of extended follow-up.
Disclosures: This study was funded by the National Institute of Diabetes and Digestive and Kidney Diseases, the Office of Research in Minority Health, and the National Institutes of Health. King Pharmaceuticals provided financial support and donated antihypertensive medications to each clinical center. Pfizer, AstraZeneca, GlaxoSmithKline, Forest Laboratories, Pharmacia, and Upjohn also donated antihypertensive drugs. None of these companies had any role in the design of the study, the accrual or analysis of data, or the preparation of the manuscript. Some of the investigators reported being in consultant and/or advisory board roles or receiving funds from numerous companies including Daiichi-Sankyo, Novartis, Amgen, King Pharmaceuticals, Abbott, Boehringer-Ingelheim, Litholink, Eli Lilly, Takeda, Merck, and Watson.
Weight Gain in Middle Age Raises Diabetes Risk
Weight gain and fat accumulation in both middle and older age raise the risk of diabetes, according to a prospective cohort study.
The links between overweight and diabetes, and between central adiposity and diabetes, are well known in younger adults but have not been fully explored in older adults, said Mary L. Biggs, Ph.D., of the University of Washington School of Public Health and Community Medicine, Seattle, and her associates.
They examined these associations using data on 4,193 subjects participating in the Cardiovascular Health Study, a prospective, longitudinal cohort study of people aged 65 years and older living in four communities in North Carolina, Maryland, Pennsylvania, and California. The subjects were enrolled beginning in 1989 and followed annually for a median of 12 years.
The mean age at baseline was 73 years; 59% of the subjects were women, and 10% were African American.
Changes in the participants' weight, body mass index, fat mass, waist circumference, waist-to-hip ratio, and waist-to-height ratio were documented from baseline onward, at ages 65 and older. The subjects also were asked to report body composition measures from when they were age 50, so that their BMI at age 50 could be calculated.
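BMI itself is just weight divided by the square of height (kg/m²), so a recalled weight and height at age 50 suffice to compute it. A minimal sketch, using the standard WHO cutoffs cited later in the article (normal under 25, obese 30 or greater); the example figures are illustrative, not study data:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms over height in meters, squared."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    """Coarse categories matching the cutoffs cited in the article."""
    if value >= 30:
        return "obese"
    if value >= 25:
        return "overweight"
    return "normal"

# Example: a recalled 95 kg at 1.75 m gives a BMI of about 31.0 (obese).
b = bmi(95, 1.75)
print(round(b, 1), bmi_category(b))  # 31.0 obese
```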
During follow-up, 339 subjects developed diabetes.
Measures of both overall and central adiposity at middle age (50 years) and at older age (65 years or older) were significantly associated with the risk of developing diabetes in men and women. Subjects in the highest category of adiposity had a two- to sixfold greater risk of incident diabetes than did those in the lowest category.
Similarly, the risk of diabetes rose monotonically with the amount of weight gained between age 50 and baseline. “Compared with participants whose weight remained stable [during that interval], those who gained 9 kg or more between the age of 50 years and study entry had an approximately threefold greater risk of developing diabetes during follow-up,” Dr. Biggs and her colleagues said (JAMA 2010;303:2504-12).
“Participants who were obese (BMI greater than or equal to 30) at 50 years of age and who experienced the most weight gain (greater than 9 kg) between the age of 50 years and study entry had five times the risk of developing diabetes, compared with weight-stable participants with normal BMI (less than 25) at 50 years of age,” they added.
Subjects in the highest categories of both BMI and waist circumference were more than four times as likely to develop diabetes as were subjects in the lowest categories of those measures.
The increased risk associated with adiposity appeared to wane as subjects aged, but even among participants aged 75 and older, those in the highest category of BMI still had double the risk of developing diabetes, compared with those in the lowest category of BMI.
The reason that diabetes risk declines somewhat after age 75 is not known. It is possible that anthropometric measures may not adequately quantify body fat at that age because of age-related changes in body composition, such as decreased muscle mass and decreased height.
“A second possibility is that regional fat distribution is more important in the etiology of diabetes than absolute fat mass,” the researchers wrote. Another reason may be that the pathology of diabetes in older adults differs from that in younger adults.
Or it simply may be that people who are more susceptible to adiposity-related death do not survive into old age, resulting in selective survival of fitter people, said Dr. Biggs and her colleagues.
The investigators were somewhat surprised to note that the risk of diabetes did not decline in subjects who lost weight during follow-up. Again, the reason is not yet known.
“Older adults may lose proportionately more muscle mass with weight loss than younger ones, decreasing the accuracy of weight loss as a surrogate for loss of adipose tissue in older adults. Furthermore, the loss of skeletal muscle mass may decrease insulin sensitivity, negating the benefit derived from fat loss,” they noted.
However, clinicians should note that the relation between weight loss and diabetes risk in older adults is complex, and “our results do not preclude the possibility that voluntary weight loss reduces the risk of diabetes in older adults,” they added.
This study was supported by the National Heart, Lung, and Blood Institute, the National Institute on Aging, the University of Pittsburgh Claude D. Pepper Older Americans Independence Center, and the National Institute of Neurological Disorders and Stroke. No financial conflicts of interest were reported.
From JAMA