Mortality Risk Doubles in Year After Hip Fracture
Mortality risk doubles during the year after hip fracture among women aged 65 years and older and then returns to baseline in many of them, but the pattern does not hold in all cases, according to a report published online Sept. 26 in Archives of Internal Medicine.
Mortality risk after sustaining a hip fracture differs by patient age, underlying health, and the interval since the injury occurred in this population, said Dr. Erin S. LeBlanc of the Center for Health Research at Kaiser Permanente Northwest Region, Portland, Ore., and her associates.
Previous studies of this issue have had methodological limitations and have yielded inconsistent results. Most have shown increased short-term mortality, but have had mixed findings on long-term mortality. "Our data suggest that previous mixed results ... may have been the result of differences in the underlying age and health status of the population being studied," Dr. LeBlanc and her colleagues said (Arch. Intern. Med. 2011 Sept. 26 [doi:10.1001/archinternmed.2011.447]).
They used data from the SOF (Study of Osteoporotic Fractures) to address these methodological limitations. The subjects were identified before hip fractures occurred, the study design was prospective, and extensive data on comorbidities allowed adjustment for potentially confounding factors.
The SOF subjects were 5,580 community-dwelling women aged 65 and older who resided in Maryland, Minnesota, Oregon, and Pennsylvania at baseline in 1986-1988. This population included 1,116 women who sustained incident hip fractures during a mean follow-up of 14 years, and 4,464 age-matched control subjects without hip fracture.
Mortality risk was highest in the first year after hip fracture. The rate was 16.9% among cases, compared with only 8.4% among controls. This doubling of risk persisted when the analysis was adjusted to account for factors such as total hip bone mineral density.
Moreover, deaths in the control group were evenly spread throughout the year, whereas those in the case group were concentrated within the first 6 months. "In addition, more than half the deaths (99 of 189 [52.4%]) in the first year following hip fracture occurred within the first 3 months for the cases," the investigators said.
When the study subjects were categorized by age (younger than 70 years, 70-79 years, or 80 years and older), the youngest group showed a fivefold rise in mortality risk during the first year after hip fracture (16.3%), compared with women younger than 70 who did not sustain a hip fracture (3.7%).
In contrast, the oldest women showed no increased mortality risk in the year following hip fracture, and the women in the middle group showed an intermediate risk.
In addition, mortality risk remained elevated for years 1-10 among women in the youngest age group, but it was somewhat lower than the mortality risk in the first year. In contrast, mortality risk declined to baseline for the next 10 years among women in the two older age groups.
"We hypothesize that age influences the risk of death after hip fracture by affecting the baseline death rate in the population. Those who are younger ... have a low risk of dying from other causes. Therefore, experiencing a hip fracture may increase their mortality risk compared with nonfracture controls.
"In contrast, octogenarians generally have a relatively high risk of dying from other causes; therefore, experiencing a hip fracture does not result in an increased risk of death during the next year compared with other women their age," the researchers said.
Because women aged 65-70 years remain at increased risk of death for an additional 5-10 years following hip fracture, prevention of the injury should be a high priority in this age group, they added.
The leading causes of death – coronary heart disease, cancer, and stroke – were the same between cases and controls. Rates of death from sepsis also were the same between the two groups. However, more women who sustained hip fractures, compared with control women, died from pneumonia (10.5% vs. 5.6%), cognitive disorders (9.2% vs. 6.7%), and osteoporotic fracture (2% vs. 0%). More control women died from cancer (11% vs. 18.2%).
"Although, in our study, [fewer than] 15% of the deaths were due to infection or osteoporosis (the most likely causes of death to be directly attributed to the fracture itself), hip fracture could have been a contributing cause in many of the remaining deaths, including those attributed to coronary heart disease and stroke," Dr. LeBlanc and her associates noted.
This study was limited in that 99% of the subjects were non-Hispanic white women aged 65 and older, so the results may not be generalizable to men, other ethnic groups, or younger women, they added.
This study was supported by the U.S. Public Health Service, the National Institute of Arthritis and Musculoskeletal and Skin Diseases, the National Institute on Aging, and the National Center for Research Resources. No financial conflicts of interest were reported.
FROM ARCHIVES OF INTERNAL MEDICINE
Major Finding: Mortality risk was highest during the year after a hip fracture, with a rate of 16.9% among women who had hip fracture but only 8.4% among those who did not.
Data Source: A prospective case-control study involving 5,580 community-dwelling women aged 65 and older at baseline who were followed for a mean of 14 years for hip fracture and mortality.
Disclosures: This study was supported by the Public Health Service, the National Institute of Arthritis and Musculoskeletal and Skin Diseases, the National Institute on Aging, and the National Center for Research Resources. No financial conflicts of interest were reported.
Factors Predict Erectile Function After Prostate Cancer Therapy
Mathematical models that are based on patient characteristics, pretreatment sexual functioning, and treatment details help predict whether men will have erectile function 2 years after therapy for early-stage prostate cancer, according to a report in the Sept. 21 issue of JAMA.
The predictive models were developed in a cohort of 1,027 patients who underwent prostatectomy, external beam radiotherapy, or brachytherapy during 2003-2006, and they were then validated against actual experience in a separate registry of 1,913 community-based patients.
This verification "suggests that these findings are generalizable and may help physicians and patients to set personalized expectations regarding prospects for erectile function in the years following primary treatment for prostate cancer," said Dr. Mehrdad Alemozaffar of the division of urology at Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, and his associates (JAMA 2011;306:1205-14).
The investigators developed their models using data from a prospective, longitudinal cohort of men who were treated at nine university-affiliated hospitals. A total of 524 elected to undergo prostatectomy, 241 opted for external beam radiotherapy, and 262 had brachytherapy for their clinical stage T1 or T2 prostate cancer.
Before treatment commenced, 28% of the prostatectomy group, 47% of the radiotherapy group, and 33% of the brachytherapy group reported that they already had some erectile dysfunction. At 2 years after treatment, these rates increased to 65%, 63%, and 57%, respectively.
For prostatectomy, four factors – younger age, lower PSA level at baseline, better pretreatment sexual functioning, and nerve-sparing surgery – were found to raise the odds that the study subjects would be able to attain functional erections suitable for intercourse 2 years after treatment. "Erectile function increased approximately linearly with decreasing age and with increasing pretreatment sexual functioning score," Dr. Alemozaffar and his colleagues said.
Using these data, they constructed a table of probabilities that men choosing prostatectomy would be able to attain functional erections. For example, a 50-year-old man’s prospects for having functional erections after prostatectomy vary between 6% and 70%, depending on his pretreatment sexual function score, his baseline PSA level, and whether a nerve-sparing surgical technique was planned.
For subjects having external beam radiotherapy, the odds that they will be able to attain functional erections suitable for intercourse improve with lower PSA level, better pretreatment sexual functioning, and no use of neoadjuvant hormone therapy. According to the model, a patient’s probability of recovering the ability to attain functional erections varies between 16% and 92%, depending on these three factors.
For those having brachytherapy, the factors associated with better odds of attaining functional erections are better pretreatment sexual functioning, younger age, black race, and lower body mass index. For example, a 60-year-old man’s probability of doing so varies from 11% to 98%, depending on his pretreatment sexual functioning, age, race, and BMI.
Dr. Alemozaffar and his associates assessed how their models performed in a separate cohort of 1,913 men enrolled in a community-based registry. The model-predicted probabilities corresponded well to the observed outcomes in this cohort.
Thus, the models provide "a validated, broadly applicable framework to predict the probability of long-term posttreatment erectile dysfunction for individual patients," they said.
It was notable that in initial univariate analyses, poorer recovery of erectile function correlated with higher numbers of comorbid conditions. However, this correlation did not persist in multivariate analyses, so the models do not include comorbidities.
"Other researchers have found diabetes and peripheral vascular disease to be associated with worse posttreatment sexual outcome," but those studies did not adjust for differences in pretreatment sexual function. It thus appears that pretreatment sexual function may supersede the effects of comorbidities on posttreatment erectile function, they said.
Also of note was the finding that the models were more accurate at predicting erectile function after external radiotherapy than after prostatectomy. The reason for this difference is not yet known, and it is possible that surgical factors, such as the surgeon’s proficiency or variations in specific techniques, "may contribute to a broader range of outcomes" after prostatectomy than after radiotherapy. This issue warrants further study, they added.
This study was supported by the National Institutes of Health. Dr. Alemozaffar’s associates reported ties to numerous industry sources.
The implication of the study by Dr. Alemozaffar and colleagues is that optimizing the prediction of outcomes requires detailed knowledge that most primary care physicians may not have – in this case, detailed knowledge of the patient’s baseline sexual function, said Dr. Michael J. Barry.
"Routinely collecting objective measures of subjective phenomena like sexual function from patients will need to become part of usual care rather than just research," he noted.
"For most scenarios, the take-away message [of this study] is that if the patient has chosen surgery, he will more than likely lose erectile function, whereas if he has chosen radiotherapy, he has a better than even chance of preserving it, at least for 2 years," Dr. Barry said.
Michael J. Barry, M.D., is in the general medicine division at Massachusetts General Hospital and Harvard Medical School, Boston. He is also affiliated with the Foundation for Informed Medical Decision Making in Boston. These remarks were adapted from his editorial accompanying Dr. Alemozaffar’s report (JAMA 2011;306:1258-9).
FROM JAMA
Major Finding: The odds that a man will recover erectile function after prostatectomy vary from 6% to 70%; after external beam radiotherapy, they vary from 16% to 92%; and after brachytherapy, they vary from 11% to 98%, depending on patient characteristics, pretreatment sexual functioning, and treatment details.
Data Source: Mathematical models to predict erectile function 2 years after prostate cancer treatment were derived from a prospective cohort study of 1,027 men treated for early-stage prostate cancer, and the models were then validated using a community registry of 1,913 cases.
Disclosures: This study was supported by the National Institutes of Health. Dr. Alemozaffar’s associates reported ties to numerous industry sources.
High Platelet Reactivity Signals High Risk of Ischemic Events
Patients with acute coronary syndromes (ACS) who show high residual platelet reactivity in response to clopidogrel on platelet function testing are at increased risk of ischemic events in both the short term and the long term, according to a prospective observational study reported in the Sept. 21 issue of JAMA.
In particular, these patients’ risk of cardiac mortality is twice as high (9.7%) as that of patients who do not show high residual platelet reactivity on platelet function testing (4.3%), said Dr. Guido Parodi of the department of cardiology, Careggi Hospital, Florence, and his associates.
To assess whether residual platelet reactivity to clopidogrel may be a prognostic marker, the investigators performed a prospective observational cohort study of 1,789 consecutive patients with acute coronary syndromes who were undergoing percutaneous coronary intervention (PCI) with stent implantation at a single center over a 4-year period. Platelet reactivity was tested using light transmittance aggregometry with an adenosine diphosphate (ADP) agonist, 12-18 hours after administering a 600-mg loading dose of clopidogrel.
The overall incidence of high residual platelet reactivity by ADP testing was 14%, a rate the investigators considered relatively low.
The primary end point was a composite of ischemic events including cardiac death, MI, urgent coronary revascularization, and stroke during 2 years of follow-up. This end point developed in 14.6% of patients with high residual platelet reactivity, compared with only 8.7% of the other patients.
"The difference in the primary end point event rate was driven by the difference in cardiac mortality, which was 9.7% in the high residual platelet reactivity group and 4.3% in the low residual platelet reactivity group; there were no differences in the other components of the primary end point," the investigators said (JAMA 2011;306:1215-23).
The higher rate of cardiac mortality emerged within 6 months of baseline and persisted throughout follow-up, they noted.
The rate of the secondary end point, stent thrombosis, also was roughly twice as high in patients with high residual platelet reactivity (6.1%) as in other patients (2.9%).
These results support the hypothesis that high residual platelet reactivity in response to clopidogrel is a prognostic marker for ischemic events.
Of note was the finding that normalization of platelet reactivity did not improve outcomes, compared with persistent platelet reactivity.
Patients who initially showed high residual platelet reactivity in response to clopidogrel underwent an adjustment of therapy to rectify the problem. An ADP-guided increase in the maintenance clopidogrel dose or an ADP-guided switch to ticlopidine normalized reactivity in 62% of this group but did not affect the other 38%, who continued to show persistently high reactivity. Yet outcomes did not differ significantly between these two groups.
This study was supported by the Italian Health Ministry. Dr. Parodi and his associates reported ties to numerous industry sources.
Despite the promising results of this and other studies, the currently available evidence "cannot support the routine use of platelet function testing in clinical practice. Until results of appropriately powered randomized controlled trials demonstrate efficacy and safety (in particular, low bleeding risk) of adjustment of antiplatelet treatment, platelet function testing should be reserved mostly as a research tool," said Dr. Dominick J. Angiolillo.
Since adjusting antiplatelet therapy on the basis of platelet function testing did not improve patient outcomes in this study, clinicians are left "without an answer about what to do with the results of platelet function testing," he noted. It also remains unknown "whether platelet reactivity is simply a marker of risk or if it is a modifiable risk factor that can affect prognosis."
Dominick J. Angiolillo, M.D., Ph.D., is at the University of Florida, Jacksonville. He reported ties to numerous industry sources. These remarks were taken from his editorial accompanying Dr. Parodi’s report (JAMA 2011;306:1260-1).
FROM JAMA
Major Finding: The primary end point of ischemic events developed in 14.6% of patients who showed high residual platelet reactivity in response to clopidogrel on platelet function testing, compared with 8.7% of other patients, and cardiac mortality was 9.7% vs 4.3%, respectively.
Data Source: A prospective observational cohort study of 1,789 ACS patients undergoing PCI at a single center during a 4-year period.
Disclosures: This study was supported by the Italian Health Ministry. Dr. Parodi and his associates reported ties to numerous industry sources.
Low Gestational Age at Birth Linked to Excess Mortality in Young Adulthood
Low gestational age at birth appears to be strongly associated with higher mortality during young adulthood, independently of fetal growth and other perinatal and socioeconomic factors, according to a study in the Sept. 21 issue of JAMA.
The robust association was observed even among "late" preterm births at 34-36 weeks, said Dr. Casey Crump of the department of medicine at Stanford (Calif.) University, and his associates.
"To our knowledge, this is the first study to report the specific contribution of gestational age at birth on mortality in adulthood. The results underscore the persistent long-term health sequelae of preterm birth," the investigators said (JAMA 2011;306:1233-40).
"Clinicians will increasingly encounter the health sequelae of preterm birth throughout the life course and will need to be aware of the long-term effects on the survivors, their families, and society," they noted.
Previous studies have examined the relationship between low birth weight and adult mortality, but have not assessed the contribution of gestational age. To do so, Dr. Crump and his colleagues performed a cohort study of 674,820 singleton infants born in 1973-1979 in Sweden, who were followed throughout their lives for all-cause and cause-specific mortality.
The study subjects were aged 29-36 years at the most recent follow-up. The prevalence of preterm birth in Sweden in the late 1970s was 5%. The prevalence in this cohort was 4.1% (27,979 preterm births).
There were 7,095 deaths among the study subjects. Mortality correlated strongly with low gestational age at birth during early childhood, an association that disappeared in late childhood and adolescence but reappeared in young adulthood.
The relationship was robust and linear in young adulthood at ages 18-36 years. Adjusting the data to account for numerous possible confounders – including the subject’s sex, birth year, and birth order; the mother’s age at delivery; the mother’s marital status; and both parents’ educational status – had little effect on the risk estimates.
An analysis that excluded subjects born with congenital malformations also did not affect the correlation between low gestational age at birth and increased mortality in young adulthood. Mortality was increased even among subjects born at the end of the preterm period, at 34-36 weeks, the investigators said.
When the data were analyzed by cause of death, low gestational age at birth was most strongly associated with mortality due to respiratory and endocrine disorders, followed by cardiovascular disorders. In contrast, it was not significantly associated with death from neurological disorders, cancer, or injury.
This finding is consistent with reports in the literature that low gestational age correlates with asthma, hypertension, diabetes, and hypothyroidism in later life, Dr. Crump and his associates said.
"The underlying mechanisms are still largely unknown but may involve a complex interplay of fetal and postnatal nutritional abnormalities; other intrauterine exposures, including glucocorticoid and sex hormone alterations; and common genetic factors," they said.
The researchers noted that the prevalence of preterm birth in the United States at present exceeds 12%, more than double the prevalence in this cohort. Most survivors "have a high level of function and self-reported quality of life," but the results of this study show that increased long-term morbidities and mortality also can be expected, Dr. Crump and his associates said.
However, it should be noted that today’s preterm infants may differ in important ways from the subjects in this study because neonatal care has advanced during the interim. "It is unclear to what extent our findings are generalizable to later cohorts, and any such comparison should be made with caution," they noted.
This study was supported by the U.S. National Institute of Child Health and Human Development, the Swedish Research Council, the Swedish Council for Working Life and Social Research, and the Avtal om Läkarutbildning och Forskning (Agreement on Medical Training and Research), Lund, Sweden. The authors reported no financial conflicts of interest.
FROM JAMA
Major Finding: Low gestational age at birth correlated robustly and in a linear fashion with excess mortality at age 18-36 years, independently of the subject’s sex, birth year, and birth order; the mother’s age at delivery and marital status; and both parents’ educational status.
Data Source: A cohort study involving 674,820 singleton births in Sweden in 1973-1979, including 27,979 preterm births that were followed through 2008.
Disclosures: This study was supported by the U.S. National Institute of Child Health and Human Development, the Swedish Research Council, the Swedish Council for Working Life and Social Research, and the Avtal om Läkarutbildning och Forskning (Agreement on Medical Training and Research), Lund, Sweden. The authors reported no financial conflicts of interest.
Varicose Vein Treatments: Laser Ablation Equals Surgery
For chronic great saphenous vein insufficiency, endovenous laser ablation is as effective and as safe as high ligation with vein stripping, according to a study published online Sept. 19 in the Archives of Dermatology.
In what researchers described as the largest and most powerful randomized clinical trial (RCT) to date comparing an endovenous technique with conventional surgery, the two approaches were "equivalent in terms of the primary objective of clinical recurrence," as well as in almost all of the secondary end points, at 2-year follow-up.
"This major finding is in accordance with those of all of the RCTs published so far comparing EVLT [endovenous laser treatment] and HLS [high ligation with stripping]" of the great saphenous vein, said Dr. Knuth Rass of the department of dermatology, venerology, and allergy, Saarland University Hospital, Homburg, Germany, and his associates.
The investigators reported the 2-year results of the ongoing RELACS (Randomized Study Comparing Endovenous Laser Ablation With Crossectomy and Stripping of the Great Saphenous Vein). The trial enrolled 400 consecutive patients (one treated leg per patient) who were seen at two medical centers in Germany for great saphenous vein insufficiency with saphenofemoral incompetence and reflux at least down to the knee level.
However, 54 subjects dropped out after randomization, mostly because they preferred the treatment to which they were not assigned. Thus, the study analyses were based on the per-protocol population of 185 patients treated with EVLT and 161 treated with HLS.
The primary end point (the rate of freedom from clinical recurrence) was 84% with EVLT and 77% with HLS, a nonsignificant difference.
Similarly, the recurrence-free rates specifically involving varicose veins originating from the operative site were 97% in both groups, the researchers reported (Arch. Dermatol. 2011 [doi:10.1001/archdermatol.2011.272]).
Also in both groups, the scores on a measure of varicose vein severity declined from baseline to 3 months, declined further from 3-12 months, and remained stable at 12-24 months. Disease-specific quality of life scores improved significantly and to the same degree in both groups, with no differences in subscores on pain, physical well-being, psychological well-being, or social well-being.
There were no significant differences between the two groups in the rate of major complications, which was 1.1% overall. These included one case of GI bleeding in the EVLT group, which was related to the use of low-molecular-weight heparin and oral ibuprofen, and two cases of thrombus propagation into the common femoral vein, also in the EVLT group.
Minor adverse effects were frequent but mild in both groups. "Phlebitic reactions, indurations, dyspigmentations, and pain incidence and intensity were more pronounced in the EVLT group. [But] pain persisted longer after HLS," noted the investigators. Bruising and dysesthesia were the same in both groups.
A "remarkable" 98% of patients in both groups said they were satisfied with treatment and would undergo the procedure again if medically necessary.
The two treatment approaches did differ in the incidence of recurrences at the saphenofemoral junction as detected on duplex ultrasonography. These recurrence-free rates were 82% with EVLT and 99% with HLS. However, this difference had no apparent effect on clinical or functional outcome.
"Currently, it remains speculative as to if, when, and to what extent the duplex-detected refluxes at the saphenofemoral junction evolve to a clinical recurrence. ... Further follow-up to 5 years after treatment is scheduled for this study and will probably provide more evidence on this topic," Dr. Rass and his associates wrote.
No conflicts of interest were reported.
For chronic great saphenous vein insufficiency, endovenous laser ablation is as effective and as safe as high ligation with vein stripping, according to a study published online Sept. 19 in the Archives of Dermatology.
In what researchers described as the largest and most powerful randomized clinical trial (RCT) to date comparing an endovenous technique with conventional surgery, the two approaches were "equivalent in terms of the primary objective of clinical recurrence," as well as in almost all of the secondary end points, at 2-year follow-up.
"This major finding is in accordance with those of all of the RCTs published so far comparing EVLT [endovenous laser treatment] and HLS [high ligation with stripping]" of the great saphenous vein, said Dr. Knuth Rass of the department of dermatology, venereology, and allergy, Saarland University Hospital, Homburg, Germany, and his associates.
The investigators reported the 2-year results of the ongoing RELACS (Randomized Study Comparing Endovenous Laser Ablation With Crossectomy and Stripping of the Great Saphenous Vein). The trial enrolled 400 consecutive patients (one treated leg per patient) who were seen at two medical centers in Germany for great saphenous vein insufficiency with saphenofemoral incompetence and reflux at least down to the knee level.
However, 54 subjects dropped out after randomization, mostly because they preferred the treatment to which they were not assigned. So the study analyses were based on the per-protocol population of 185 treated with EVLT and 161 treated with HLS.
The primary end point (the rate of freedom from clinical recurrence) was 84% with EVLT and 77% with HLS, a nonsignificant difference.
Similarly, the recurrence-free rates specifically involving varicose veins originating from the operative site were 97% in both groups, the researchers reported (Arch. Dermatol. 2011 [doi:10.1001/archdermatol.2011.272]).
Also in both groups, the scores on a measure of varicose vein severity declined from baseline to 3 months, declined further from 3-12 months, and remained stable at 12-24 months. Disease-specific quality of life scores improved significantly and to the same degree in both groups, with no differences in subscores on pain, physical well-being, psychological well-being, or social well-being.
There were no significant differences between the two groups in the rate of major complications, which was 1.1% overall. These included one case of GI bleeding in the EVLT group, which was related to the use of low-molecular-weight heparin and oral ibuprofen, and two cases of thrombus propagation into the common femoral vein, also in the EVLT group.
Minor adverse effects were frequent but mild in both groups. "Phlebitic reactions, indurations, dyspigmentations, and pain incidence and intensity were more pronounced in the EVLT group. [But] pain persisted longer after HLS," noted the investigators. Bruising and dysesthesia were the same in both groups.
A "remarkable" 98% of both groups said they were satisfied with treatment and would undergo each procedure again if medically necessary.
The two treatment approaches did differ in the incidence of recurrences at the saphenofemoral junction as detected on duplex ultrasonography. These recurrence-free rates were 82% with EVLT and 99% with HLS. However, this difference had no apparent effect on clinical or functional outcome.
"Currently, it remains speculative as to if, when, and to what extent the duplex-detected refluxes at the saphenofemoral junction evolve to a clinical recurrence. ... Further follow-up to 5 years after treatment is scheduled for this study and will probably provide more evidence on this topic," Dr. Rass and his associates wrote.
No conflicts of interest were reported.
FROM ARCHIVES OF DERMATOLOGY
Major Finding: The rate of freedom from clinical recurrence was 84% for endovenous laser ablation and 77% for high ligation with vein stripping, a nonsignificant difference.
Data Source: A randomized clinical trial of patients treated at two centers in Germany for insufficiency of the great saphenous vein with EVLT (185 patients) or HLS (161 patients) and followed for 2 years.
Disclosures: No conflicts of interest were reported.
Concomitant Golimumab Lessened Clinical Rheumatoid Arthritis
Adding golimumab to methotrexate therapy lessened synovitis, osteitis, and bone erosion to a greater degree than did placebo plus methotrexate, a study has shown.
These improvements were evident as early as the 12th week of treatment on serial magnetic resonance imaging exams, which proved to be much more sensitive than conventional radiography at demonstrating the changes, said Dr. Mikkel Ostergaard, professor of rheumatology at Copenhagen University Hospital at Glostrup, Denmark, and his associates.
They reported the results of a substudy of the 1-year GO-BEFORE (Golimumab Before Employing Methotrexate as the First-Line Option in the Treatment of Rheumatoid Arthritis of Early Onset) study, a large randomized controlled trial comparing various combinations of oral methotrexate (MTX), golimumab injections, and placebo in rheumatoid arthritis (RA) patients. GO-BEFORE’s findings demonstrated that after 28 weeks, "golimumab in combination with MTX reduced signs and symptoms and radiographic progression of RA in MTX-naive patients, with a safety profile similar to other anti-[tumor necrosis factor] agents," the investigators said.
Their substudy involved 318 of these subjects who underwent serial MRI evaluations of the wrist and metacarpophalangeal joints at 12 and 24 weeks. Synovitis and osteitis (bone marrow edema), which signal heavy infiltration by inflammatory cells including osteoclasts, are precursors of new bone erosions. These changes are visible on MRI well before conventional radiography can detect them.
The MRIs were assessed by two readers and an adjudicator using the Rheumatoid Arthritis MRI Scoring (RAMRIS) system, "which has demonstrated very good reliability and a high level of sensitivity to change." Study subjects who received MTX plus golimumab showed significantly better RAMRIS scores than did those who received MTX alone, as early as week 12 and continuing through week 24, Dr. Ostergaard and his colleagues said (Arthritis Rheum. 2011 Aug. 31 [doi:10.1002/art.30592]).
For example, at week 12, synovitis scores decreased by 1.92 points for the wrist and metacarpophalangeal joints and by 0.85 points for the wrist alone with combined therapy, compared with 0.14 points and 0.02 points with MTX alone. Bone edema–osteitis scores decreased by 1.82 points with combined therapy but only by 0.56 points with MTX alone, and bone erosion scores decreased by 0.40 points vs. 0.24 points.
"Similar trends were observed in the sensitivity analyses conducted for the mean change in RAMRIS scores from baseline to week 24," they added.
In a series of MRIs that were representative of the substudy population as a whole, "images show bone edema that was extensive at baseline, markedly decreased at week 12, and nearly resolved at week 24," they noted.
The researchers emphasized that the substudy confirmed the conclusion of the entire GO-BEFORE clinical trial, but that MRI demonstrated the statistically significant difference between study groups in less than half the time (12 weeks rather than 28 weeks) and using fewer than half the subjects (318 patients rather than 637 patients). This documents that MRI is a more sensitive tool for detecting structural damage than conventional radiography, they said.
This study was funded by Centocor and Schering-Plough. The investigators reported no other financial disclosures.
FROM ARTHRITIS AND RHEUMATISM
Major Finding: RA patients who received methotrexate plus golimumab showed significantly better RAMRIS scores on MRI than did those who received MTX alone, as early as week 12 and continuing through week 24.
Data Source: A substudy of the GO-BEFORE study, involving 318 patients with active RA whose response to treatment was monitored via MRI.
Disclosures: This study was funded by Centocor and Schering-Plough. The investigators reported no other financial disclosures.
Lymph Node Target Questioned in Colon Cancer Surgery
The number of lymph nodes evaluated for metastases during colon cancer surgery has increased markedly during the past 20 years – but the improvement is not associated with any increase in the proportion of cancers that are node-positive, according to a report in the Sept. 14 issue of JAMA.
This suggests that this "upstaging mechanism" – raising the number of lymph nodes evaluated to improve identification of lymph-node–positive cancers, and thus to tailor treatment accordingly – cannot be the primary basis for improved patient survival, said Helen M. Parsons, MPH, of the U.S. National Cancer Institute’s applied research program in Bethesda, Md., and her associates.
The investigators analyzed 20-year trends in lymph node evaluation using data from 1988-2008 from the NCI’s SEER (Surveillance, Epidemiology, and End Results) registry. They reviewed records on 86,394 adults treated with radical surgical resection of the colon for a first occurrence of invasive adenocarcinoma.
The number of lymph nodes evaluated rose markedly during the study period. In 1988-1990, only 35% of patients underwent "acceptable" lymph node evaluation, defined as examination of at least 12 lymph nodes. That rate increased to 38% in 1994-1996, to 47% in 2000-2002, and to 74% in 2006-2008, Ms. Parsons and her colleagues said (JAMA 2011;306:1089-97).
However, this increase was not associated with a rise in node-positive cancer during the same period. Patients with "very high levels of lymph node evaluation ... were only slightly more likely to have node-positive disease, compared with those with few nodes evaluated," the investigators wrote.
Meanwhile, the relative hazard of death continued to decline as more lymph nodes were evaluated, whether patients had node-positive or node-negative disease. Paradoxically, the improvement was greater in node-negative than in node-positive patients.
"After adjusting for patient, tumor, and primary treatment factors, we found patients with node-negative disease had lower 5-year mortality when more lymph nodes were evaluated. This effect was unexpectedly larger than that observed for patients with node-positive disease.
"These findings suggest that providers who evaluate more lymph nodes may provide some other unmeasured care, leading to better outcomes," the researchers said.
"Alternatively, the relationship between nodes evaluated and survival may reflect an underlying interaction between the tumor and the individual, influencing survival. In other words, tumor factors may stimulate lymph nodes to enlarge, reflecting immune system recognition of the tumor and more favorable outcomes," Ms. Parsons and her associates said.
This study was limited in that SEER does not collect data pertaining to comorbidities that may have affected the surgeons’ ability to excise adequate tissue samples for lymph node evaluation, they said.
The study results suggest that some factor besides upstaging (possibly improved surgical quality or postsurgical care) "may be the driving mechanism between the lymph node–survival relationship. As a result, implementing wide-range quality improvement initiatives to increase lymph node evaluation for colon cancer may have a limited effect on improving survival in this population," they added.
No conflicts of interest were reported.
In addition to the percentage of cases in which lymph node sampling was "adequate," the average number of lymph nodes sampled rose steadily during the study period. Twenty or more nodes were sampled in only 12% of patients in 1988-1990, a figure that increased to 34% of patients by the end of the study period, said Dr. Sandra L. Wong.
Yet "despite searching for and finding many more lymph nodes in resected colon specimens, the proportion of patients with node-positive cancers during this time was unchanged, ranging from 40% to 42%," she said.
This effectively debunks the notion that counting more nodes improves staging accuracy. Instead, it may indicate that higher lymph node counts are a proxy for improved care overall, "whether on the part of the surgeons who perform a more thorough cancer operation or pathologists who are more diligent in examining operative specimens," Dr. Wong said.
It’s also possible that "patients who mount a stronger immune response to their cancers may have larger lymph nodes present in regional nodal basins, making them easier to find by pathologists. These patients may have an improved prognosis irrespective of finding cancer in their lymph nodes," she noted.
Dr. Wong is in the department of surgery at the University of Michigan, Ann Arbor. She reported no financial conflicts of interest. These remarks were adapted from her editorial accompanying Ms. Parsons' report (JAMA 2011;306:1139-41).
FROM JAMA
Major Finding: In 1988-1990, only 35% of colon cancer patients underwent examination of at least 12 lymph nodes; that rate increased to 38% in 1994-1996, to 47% in 2000-2002, and to 74% in 2006-2008.
Data Source: An observational cohort study analyzing SEER data on 86,394 patients who underwent surgery for colon cancer in 1988-2008.
Disclosures: No conflicts of interest were reported.
Graduated Driver Licensing Cuts Younger Teens' Fatal Crashes
Restrictions on driving by young, new drivers, known as graduated driver licensing programs, substantially decreased the incidence of fatal crashes among the 16-year-old drivers for whom they were designed, according to a report in the Sept. 14 issue of JAMA.
Paradoxically, however, the same programs appear to raise the incidence of fatal crashes for drivers aged 18 and 19, who are not directly subject to the restrictions, said Scott V. Masten, Ph.D., of the California Department of Motor Vehicles’ research and development branch, Sacramento, and his associates.
All 50 states and the District of Columbia have adopted graduated driver licensing (GDL) systems, which mandate that novice drivers gain more experience in low-risk conditions before they "graduate," step by step, into driving under riskier conditions. Drivers younger than 18 years can attain full, unrestricted licensure only after they complete a lengthy learning period supervised by an adult. The strongest programs also add steps that limit driving at night and/or driving with multiple underage passengers.
Dr. Masten and his colleagues assessed data from the nationwide Fatality Analysis Reporting System for the period from 1986 through 2007, which included information on all drivers, vehicles (passenger cars, light pickup trucks, vans, and sport utility vehicles), and crash circumstances for every crash that involved a death.
In unadjusted analyses, the rates of fatal crashes for each age separately – 16-year-olds, 17-year-olds, 18-year-olds, and 19-year-olds – as well as the rate of fatal crashes for all teenagers combined were consistently lower in states that had three-step GDL programs than in states that did not. The unadjusted rate of fatal crashes for all adolescents combined was 29.7 per 100,000 person-years with the strongest GDL programs, 36.8 per 100,000 person-years with weaker GDL programs, and 47.2 per 100,000 person-years in programs with none of the key GDL elements.
However, in adjusted analyses, GDL programs were associated with a lower incidence of fatal crashes only among 16-year-old drivers. In addition, stronger programs appeared to decrease the rate more effectively than weaker programs among 16-year-olds. But both types of programs slightly raised the rate among 18- and 19-year-olds.
"Since enactment of the first program in 1996, GDL programs (weaker and stronger combined) are estimated to have been associated with 1,348 fewer fatal crashes involving 16-year-old drivers but with 1,086 more involving 18-year-old drivers," the investigators said (JAMA 2011;306:1098-1103).
The reasons for the paradoxical increase among older teens are not known. "Mandatory periods of supervised driving clearly reduce risk while novices learn how to handle a vehicle, gain insights into the behaviors of other drivers, and develop understanding of the physical driving environment.
"Supervised driving, however, is co-driving, and some important lessons of experience, such as the need for self-regulation and what it means to be fully responsible for a vehicle, cannot be learned until teens begin driving alone. Under GDL, this now occurs at least 6 months later, reducing the [total] time that young drivers have to learn from driving on their own before they are 18," Dr. Masten and his associates noted.
"Research is needed to determine what accounts for the increase among 18-year-old drivers and whether this increase occurs among nonfatal crashes as well," they added.
Unfortunately, there is no state-specific national database of nonfatal crashes in the United States.
The investigators cautioned that fatal crashes "represent a small and atypical subset of all crashes." They are much more likely than nonfatal crashes to involve high-risk behaviors such as drinking and speeding. GDL programs’ major influence would be on crashes attributable to lack of understanding rather than to crashes attributable to risky behavior, they said.
No conflicts of interest were reported.
FROM JAMA
Major Finding: Graduated driver licensing was associated with a lower rate of fatal crashes for 16-year-old drivers, cutting the total by an estimated 1,348 crashes since such programs were introduced in 1996.
Data Source: Analysis of data on fatal car accidents from the nationwide Fatality Analysis Reporting System for 1986-2007, with detailed information about adolescent drivers, vehicles, and the circumstances surrounding the crashes.
Disclosures: No conflicts of interest were reported.
Venlafaxine, Clonidine Top Placebo for Breast Cancer Hot Flashes
Venlafaxine and clonidine both outperformed placebo in controlling hot flashes among women with breast cancer in a study published online Sept. 12 in the Journal of Clinical Oncology.
Effective treatments for hot flashes may improve these patients’ ability to continue their anticancer therapies, said Dr. Annelies H. Boekhout of The Netherlands Cancer Institute, Amsterdam, and her associates.
The serotonin-norepinephrine reuptake inhibitor (SNRI) venlafaxine (Effexor) and the antihypertensive clonidine "both are often prescribed treatments and are recommended in clinical guidelines in the management of hot flashes. However, a three-arm trial comparing clonidine, venlafaxine, and placebo in patients with breast cancer has not been conducted" until now, they noted.
In their double-blind study at three Dutch hospitals, 102 women with breast cancer who experienced at least two hot flashes per day were stratified by age, duration of symptoms, concurrent endocrine therapy, and previous chemotherapy, and randomly assigned to receive 75 mg venlafaxine (41 patients), 0.1 mg clonidine (41 patients), or matching placebo (20 patients) daily for 12 weeks.
The women completed daily diaries recording the frequency and severity of hot flashes. They also reported every week on adverse events such as reduced appetite, nausea, sleepiness, dizziness, fatigue, dry mouth, and constipation. They recorded their sleep quality, anxiety, depression, and sexual function at 4 weeks and at the conclusion of treatment.
A total of 22 subjects (22%) either dropped out of the study or were lost to follow-up. Two patients (5%) in the venlafaxine group and six (15%) in the clonidine group cited adverse effects such as somnolence, dizziness, and dry mouth as their reason for discontinuing. Another 9% of patients discontinued because of noncompliance, which "had some effect on the observed differences between treatments in this study."
Among the 35 women assigned to venlafaxine who completed the trial, there was a 42% decline in hot flashes during weeks 1-4, compared with the placebo group. Over the entire study period, the reduction in hot flashes was 41% with venlafaxine, compared with placebo.
Among the 28 women assigned to clonidine who completed the trial, hot flashes declined by only 26% during weeks 1-4 but then declined another 22% during the remainder of the study, for an overall reduction of approximately 45%.
Thus, both active agents decreased the frequency and severity of hot flashes compared with placebo, with no discernible difference between the two by week 12. "A more rapid reduction of hot flashes suggests that venlafaxine is to be preferred over clonidine," Dr. Boekhout and her colleagues said (J. Clin. Oncol. 2011 Sept. 12 [doi:10.1200/JCO.2010.33.1298]).
They added that it is "advisable to treat patients to manage hot flashes with venlafaxine 37.5 mg daily in the first week and increase the venlafaxine dose to 75 mg if greater efficacy is desired."
A total of 14 patients (34%) in the clonidine group, 23 (56%) in the venlafaxine group, and 4 (20%) in the placebo group said they wished to continue the study treatment at the end of the trial.
Women taking clonidine reported more symptoms of anxiety and women taking venlafaxine reported more symptoms of depression. Sexual function and sleep quality did not differ between the two groups. However, the duration of this study may have been too short to adequately assess these adverse effects, the researchers noted.
No conflicts of interest were reported.
The main weakness of this study was that "the patient numbers were too small to reliably identify suspected differences between the two active study arms," said Dr. Charles L. Loprinzi, Dr. Debra L. Barton, and Dr. Rui Qin.
The unbalanced randomization scheme and the unequal dropout rates, which likely were due to perceived toxicities, meant that only 35 patients were available for analysis in the venlafaxine group, 28 in the clonidine group, and 17 in the placebo group. To detect a 10% difference between the two active drugs, 156 subjects would have been needed per study arm, and to detect a 5% difference, 620 would have been needed. "With the currently reported sample size ... the power of detecting a 10% difference is only 29%," they noted.
For clinicians, they added, available data suggest multiple nonestrogenic options are available for treating hot flashes. "Our suggestion is that these nonhormonal options be tried in the order in which they are listed (an antidepressant, then an antiseizure medication, then clonidine), unless there are contraindications to particular drugs in individual patients," they wrote.
Dr. Loprinzi, Dr. Barton, and Dr. Qin are at the Mayo Clinic in Rochester, Minn. Dr. Loprinzi reported ties to Pfizer. These remarks were taken from their editorial accompanying Dr. Boekhout’s report (J. Clin. Oncol. 2011 Sept. 12 [doi:10.1200/JCO.2011.37.5865]).
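The editorial's sample-size arithmetic can be sketched with a standard two-sample normal-approximation power formula. The standard deviation used below (about 31.5 percentage points of hot-flash reduction) is a hypothetical value, back-solved so that 156 patients per arm yields 80% power for a 10-point difference; it is not taken from the paper.

```python
# Illustrative power calculation for a two-sided, two-sample z-test on a
# mean difference. The SD of ~31.5 points is an assumed value (not from
# the trial), chosen so that 156 patients/arm gives ~80% power to detect
# a 10-point difference, matching the editorial's stated requirement.
from math import erf, sqrt

def norm_cdf(z):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def power_two_sample(diff, sd, n1, n2):
    """Approximate power of a two-sided two-sample z-test (alpha = 0.05)."""
    z_alpha = 1.96  # critical value for alpha = 0.05, two-sided
    se = sd * sqrt(1.0 / n1 + 1.0 / n2)
    return norm_cdf(diff / se - z_alpha)

# With 156 patients per arm, power for a 10-point difference is ~80%.
print(round(power_two_sample(10, 31.5, 156, 156), 2))
# With the trial's analyzable groups (35 vs. 28 patients), power falls to
# roughly a quarter, in the same ballpark as the editorial's 29% figure.
print(round(power_two_sample(10, 31.5, 35, 28), 2))
```

Under these assumptions the calculation reproduces the editorial's central point: the analyzable sample was far too small to distinguish the two active drugs reliably.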
FROM THE JOURNAL OF CLINICAL ONCOLOGY
Major Finding: Both venlafaxine and clonidine reduced the frequency and severity of hot flashes by approximately 45%, compared with placebo.
Data Source: A prospective, randomized, double-blind, multicenter clinical trial comparing 12 weeks of venlafaxine, clonidine, or placebo for control of hot flashes in 102 Dutch women with breast cancer.
Disclosures: No financial conflicts of interest were reported.
Poststroke Statins May Not Raise Hemorrhage Risk
Statin therapy did not raise the risk of intracerebral hemorrhage among older survivors of ischemic stroke in a large observational study published online Sept. 12 in Archives of Neurology.
"At present, more than 80% of patients discharged from the hospital with a diagnosis of ischemic stroke are prescribed statin therapy. ... We found no evidence that such patients are at higher risk for cerebral bleeding than individuals who do not receive statins.
"Physicians should continue to adhere to current treatment guidelines recommending statin therapy for most patients with a history of ischemic stroke," wrote Dr. Daniel G. Hackam of the department of clinical neurologic sciences, University of Western Ontario, London, and his associates.
After clinical practice guidelines recommended statin therapy as protective against recurrent ischemic stroke in 2006, exploratory analyses in two clinical trials suggested that the drugs may actually raise the risk of hemorrhagic stroke. These reports prompted uncertainty and controversy over whether the known benefits of statin therapy in this patient population outweighed the possible risks.
Dr. Hackam and his colleagues performed a retrospective, population-based cohort study to examine the association between statin therapy and intracerebral hemorrhage in older survivors of ischemic stroke. They assessed the medical records of 17,872 patients 66 years and older (mean age, 78 years) who were treated at any Ontario hospital for ischemic stroke between 1994 and 2008 and whose records were available through 2010 to track the development of intracerebral hemorrhage.
The investigators compared the outcomes of 8,936 study subjects who began taking statins within 120 days of hospital discharge with the same number of control subjects who did not take statins. The two groups were matched on the basis of 75 patient characteristics.
During a median follow-up of 4 years, there were 213 episodes of intracerebral hemorrhage. The rate was slightly lower among patients taking statins (2.94/1,000 patient-years) than among controls (3.71/1,000 patient-years).
"The hazard ratio for statin exposure was 0.87, indicating no association between statins and intracerebral hemorrhage," the investigators wrote (Arch. Neurol. 2011 Sept. 12 [doi:10.1001/archneurol.2011.228]).
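As a quick check, the crude rate ratio implied by the reported incidence figures is close to, but not identical with, the published estimate; the hazard ratio of 0.87 is model-based and additionally accounts for the 75 matched patient characteristics.

```python
# Crude incidence-rate ratio from the reported rates (per 1,000
# patient-years). The adjusted hazard ratio (0.87) need not equal this
# crude figure, since it comes from a matched, covariate-adjusted model.
rate_statin = 2.94    # intracerebral hemorrhage rate, statin group
rate_control = 3.71   # intracerebral hemorrhage rate, control group

crude_ratio = rate_statin / rate_control
print(round(crude_ratio, 2))  # ~0.79, slightly lower than the adjusted 0.87
```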
There were no associations between statin therapy and hemorrhage across numerous subgroups of patients; the risks were the same regardless of patient age, sex, socioeconomic status, major comorbidities, use of antiplatelet therapy, and use of anticoagulants.
In addition, unexposed control subjects had the same risks as did patients taking low doses of statins and patients taking high doses, so no dose-response relationship was observed.
There also were no differences in the use of statin therapy in the subgroup of study subjects who developed fatal hemorrhagic stroke during follow-up.
And in an analysis excluding "crossover" subjects – patients in the statin group who were nonadherent and patients in the control group who began statin therapy during follow-up – the results showed a significantly lower rate of intracranial hemorrhage in those who actually took statins compared with those who did not.
Furthermore, the researchers examined the use of several unrelated medical and surgical procedures in the study population, in an effort to adjust for the possibility that statin users might simply be more health conscious or heavier users of the health care system than nonusers. "As anticipated, we found no association between statin exposure and any of these events ... [which] argues against healthy user bias or screening bias in our cohort," Dr. Hackam and his associates wrote.
They cautioned that a recent study suggested that people with a history of lobar hemorrhage might be at particular risk from statin therapy. Since their study "could not test this important subset" of stroke survivors, clinicians should remain cautious about prescribing statins for such patients, the researchers said.
This study was supported by the Physicians’ Services Incorporated Foundation (a nonprofit medical research charity), the Canadian Institutes for Health Research, the Heart and Stroke Foundation of Ontario, the Canadian Stroke Network, the Institute for Clinical Evaluative Sciences, and the Ontario Ministry of Health and Long-Term Care. One of Dr. Hackam’s associates reported ties to Pfizer, Eli Lilly, Novartis, GlaxoSmithKline, and Boehringer Ingelheim.
Despite the findings of this "carefully thought out" study, "the clinical decision to administer a statin following intracerebral hemorrhage remains a challenging one, with available evidence tilting in the direction of withholding such therapy, especially when there is a history of lobar brain hemorrhage," wrote Dr. Philip B. Gorelick.
"I recommend careful control of modifiable risk factors for brain hemorrhage, such as blood pressure, in those who are treated with a statin. Other statin-associated risks for ICH [intracerebral hemorrhage] such as history of [hemorrhagic stroke] or use of antithrombotic therapy, and possibly the presence of cerebral microbleeds, should be carefully considered in the clinical decision-making process," he said.
Dr. Gorelick is with the Center for Stroke Research in the department of neurology and rehabilitation at the University of Illinois at Chicago. He reported serving as a consultant to AstraZeneca and Pfizer. These remarks were taken from his editorial accompanying Dr. Hackam’s report (Arch. Neurol. 2011 Sept. 12 [doi:10.1001/archneurol.2011.234]).
Statin therapy did not raise the risk of intracerebral hemorrhage among older survivors of ischemic stroke in a large observational study published online Sept. 12 in Archives of Neurology.
"At present, more than 80% of patients discharged from the hospital with a diagnosis of ischemic stroke are prescribed statin therapy. ... We found no evidence that such patients are at higher risk for cerebral bleeding than individuals who do not receive statins.
"Physicians should continue to adhere to current treatment guidelines recommending statin therapy for most patients with a history of ischemic stroke," wrote Dr. Daniel G. Hackam of the department of clinical neurologic sciences, University of Western Ontario, London, and his associates.
After clinical practice guidelines recommended statin therapy as protective against recurrent ischemic stroke in 2006, exploratory analyses in two clinical trials suggested that the drugs may actually raise the risk of hemorrhagic stroke. These reports prompted uncertainty and controversy over whether the known benefits of statin therapy in this patient population outweighed the possible risks.
Dr. Hackam and his colleagues performed a retrospective, population-based cohort study to examine the association between statin therapy and intracerebral hemorrhage in older survivors of ischemic stroke. They assessed the medical records of 17,872 patients 66 years and older (mean age, 78 years) who were treated at any Ontario hospital for ischemic stroke between 1994 and 2008 and whose records were available through 2010 to track the development of intracerebral hemorrhage.
The investigators compared the outcomes of 8,936 study subjects who began taking statins within 120 days of hospital discharge with the same number of control subjects who did not take statins. The two groups were matched on the basis of 75 patient characteristics.
During a median follow-up of 4 years, there were 213 episodes of intracerebral hemorrhage. The rate was slightly lower among patients taking statins (2.94/1,000 patient-years) than among controls (3.71/1,000 patient-years).
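For readers unfamiliar with the unit, an incidence rate per 1,000 patient-years is simply the number of events divided by the total follow-up time accrued by the group, multiplied by 1,000. A minimal sketch of that arithmetic follows; the event counts and person-year totals used here are illustrative placeholders chosen only to roughly reproduce the reported rates, not figures taken from the study itself.

```python
def rate_per_1000_py(events, patient_years):
    """Crude incidence rate per 1,000 patient-years: events / follow-up time x 1,000."""
    return events / patient_years * 1000

# Hypothetical per-group numbers (the article reports only the combined
# total of 213 events and the final rates, not per-group person-years).
statin_rate = rate_per_1000_py(94, 31973)    # ~2.94 per 1,000 patient-years
control_rate = rate_per_1000_py(119, 32075)  # ~3.71 per 1,000 patient-years
print(round(statin_rate, 2), round(control_rate, 2))
```

Note that the crude rate ratio from such a calculation is not the same quantity as the hazard ratio the investigators report, which comes from a time-to-event model.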
"The hazard ratio for statin exposure was 0.87, indicating no association between statins and intracerebral hemorrhage," the investigators wrote (Arch. Neurol. 2011 Sept. 12 [doi:10.1001/archneurol.2011.228]).
There were no associations between statin therapy and hemorrhage across numerous subgroups of patients; the risks were the same regardless of patient age, sex, socioeconomic status, major comorbidities, use of antiplatelet therapy, and use of anticoagulants.
In addition, unexposed control subjects had the same risks as did patients taking low doses of statins and patients taking high doses, so no dose-response relationship was observed.
There also were no differences in the use of statin therapy in the subgroup of study subjects who developed fatal hemorrhagic stroke during follow-up.
In an analysis excluding "crossover" subjects – patients in the statin group who were nonadherent and patients in the control group who began statin therapy during follow-up – the rate of intracerebral hemorrhage was significantly lower in those who actually took statins than in those who did not.
Furthermore, the researchers examined the use of several unrelated medical and surgical procedures in the study population, in an effort to adjust for the possibility that statin users might simply be more health conscious or heavier users of the health care system than nonusers. "As anticipated, we found no association between statin exposure and any of these events ... [which] argues against healthy user bias or screening bias in our cohort," Dr. Hackam and his associates wrote.
They cautioned that a recent study suggested that people with a history of lobar hemorrhage might be at particular risk from statin therapy. Since their study "could not test this important subset" of stroke survivors, clinicians should remain cautious about prescribing statins for such patients, the researchers said.
This study was supported by the Physicians’ Services Incorporated Foundation (a nonprofit medical research charity), the Canadian Institutes for Health Research, the Heart and Stroke Foundation of Ontario, the Canadian Stroke Network, the Institute for Clinical Evaluative Sciences, and the Ontario Ministry of Health and Long-Term Care. One of Dr. Hackam’s associates reported ties to Pfizer, Eli Lilly, Novartis, GlaxoSmithKline, and Boehringer Ingelheim.
FROM ARCHIVES OF NEUROLOGY
Major Finding: Among survivors of ischemic stroke aged 66 years and older, the rate of intracerebral hemorrhage was slightly lower in those taking statin therapy (2.94/1,000 patient-years) than in those not taking statins (3.71/1,000 patient-years).
Data Source: A retrospective cohort study involving 8,936 stroke survivors who took statin therapy and 8,936 matched controls followed for a median of 4 years.
Disclosures: This study was supported by the Physicians’ Services Incorporated Foundation (a nonprofit medical research charity), the Canadian Institutes for Health Research, the Heart and Stroke Foundation of Ontario, the Canadian Stroke Network, the Institute for Clinical Evaluative Sciences, and the Ontario Ministry of Health and Long-Term Care. One of Dr. Hackam’s associates reported ties to Pfizer, Eli Lilly, Novartis, GlaxoSmithKline, and Boehringer Ingelheim.