Gait Predicts Outcomes in Geriatric Surgery
Gait speed independently predicts both major morbidity and mortality in elderly patients who are about to undergo cardiac surgery, according to a prospective, blinded study.
“This simple, rapid, and inexpensive test effectively stratifies patients beyond traditional estimates of risk, which tend to be inaccurate in the elderly,” said Dr. Jonathan Afilalo of McGill University, Montreal, and his associates.
Half the cardiac surgeries done in North America involve elderly patients (aged at least 70 years), but scoring systems for estimating operative risk perform poorly in this age group, “overestimating mortality by as much as 250%,” they noted (J. Am. Coll. Cardiol. 2010;56:1668-76).
Dr. Afilalo and his colleagues performed what they described as the first study to test the value of gait speed as a predictor of poor outcomes in elderly cardiac surgery patients.
The prospective, blinded study involved 131 patients (mean age, 76 years) who were scheduled to undergo elective coronary artery bypass and/or valve replacement or repair via standard sternotomy at four university-affiliated medical centers across Canada and the United States.
Before surgery, the study subjects were timed as they walked a distance of 5 meters in a well-lit hallway; subjects were permitted to use an aid such as a cane or walker if needed. A time of 6 seconds or longer was classified as a slow gait speed, whereas any time under 6 seconds was classified as a normal gait speed.
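The dichotomous cutoff described above is simple enough to express directly. As an illustrative sketch (not code from the study), the function below classifies a timed 5-meter walk using the 6-second threshold; the derived speed in m/s is an added convenience, not part of the published protocol:

```python
def classify_gait_speed(time_seconds: float) -> str:
    """Classify a timed 5-meter walk per the study's cutoff.

    A time of 6 seconds or longer (i.e., a speed of roughly
    0.83 m/s or slower over 5 meters) is classified as slow;
    any faster time is classified as normal.
    """
    return "slow" if time_seconds >= 6.0 else "normal"

# A patient covering 5 m in 7.2 s (~0.69 m/s) is slow;
# one covering it in 4.5 s (~1.11 m/s) is normal.
print(classify_gait_speed(7.2))  # slow
print(classify_gait_speed(4.5))  # normal
```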
The primary composite end point was in-hospital mortality or any of five major complications (stroke, renal failure, prolonged ventilation, deep sternal wound infection, and need for reoperation).
In all, 60 patients (46%) were judged to have slow gait speed before surgery. Interestingly, gait speed did not correlate with the Society of Thoracic Surgeons' risk score, “suggesting that these were representing distinct domains,” the investigators said.
After surgery, 30 patients (23%) experienced the primary composite end point.
Slow gait speed was a strong and independent predictor, associated with a 3.17-fold increase in risk of the primary end point. Moreover, adding gait speed to existing risk prediction models improved their performance in predicting which patients would experience an adverse event and which patients would need “to be discharged to a health care facility for ongoing medical care or rehabilitation,” Dr. Afilalo and his associates noted.
Women with slow gait speed appeared to be at particularly high risk for adverse outcomes.
The investigators reported no financial conflicts of interest.
View on the News
An Important New Tool
Existing risk-assessment tools for elderly cardiac patients are inadequate, according to Dr. Joseph C. Cleveland Jr. “We must prepare ourselves to face decisions regarding treatment options for this exponentially growing segment of our population with scant data to appropriately guide our decisions.”
In this context, Dr. Afilalo and his associates have given clinicians an important, simple, and “extraordinarily cost-effective” tool, he wrote in an editorial accompanying the study (J. Am. Coll. Cardiol. 2010;56:1677-8). Assessing patients' gait speed requires only an observer, a stopwatch, and a well-lit hallway.
DR. JOSEPH C. CLEVELAND JR. is with the University of Colorado Health Sciences Center, Denver. He reported ties to Thoratec Corp., HeartWare Corp., and Baxter BioSurgery.
Barbershop Intervention Improves Hypertension Control
An outreach program in which barbers served as health educators – monitoring their black male clients' hypertension and referring them for medical treatment when necessary – improved the rate of blood pressure control by about 9% over 10 months.
The intervention, which was tested in 17 black-owned barbershops in a single Texas county, motivated about half the hypertensive patrons at participating barbershops to see a physician, and reduced their systolic blood pressure by a mean of 2.5 mm Hg, said Dr. Ronald G. Victor of the University of Texas Southwestern Medical Center, Dallas, and his associates.
“If the intervention could be implemented in the approximately 18,000 black-owned barbershops in the United States to reduce blood pressure by 2.5 mm Hg in the approximately 50% of hypertensive U.S. black men who patronize these barbershops (2.2 million persons), we project that about 800 fewer myocardial infarctions, 550 fewer strokes, and 900 fewer deaths would occur in the first year alone, saving about $98 million in [coronary heart disease] care and $13 million in stroke care (but offset by $6 million in additional non-CHD costs contributed by persons who would otherwise have died),” the investigators noted.
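The net savings implied by that projection follow from simple arithmetic on the figures the investigators quote. As an illustrative sketch (the dollar amounts come from the quotation above; the netting itself is our own arithmetic check):

```python
# First-year savings projected by the investigators, netted out
chd_savings = 98_000_000     # coronary heart disease care
stroke_savings = 13_000_000  # stroke care
non_chd_offset = 6_000_000   # added non-CHD costs from survivors

net_savings = chd_savings + stroke_savings - non_chd_offset
print(f"${net_savings:,}")  # $105,000,000
```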
Black-owned barbershops “are rapidly gaining traction as potential community partners for health promotion programs targeting hypertension as well as diabetes, prostate cancer, and other diseases that disproportionately affect black men,” the researchers said.
Such barbershops “are a cultural institution that draws a large and loyal male clientele and provides an open forum for discussion of numerous topics, including health, with influential peers.”
Dr. Victor and his colleagues offered free blood pressure screening to patrons of 17 barbershops representing four geographic sectors with sizable black populations in the Barber-Assisted Reduction in Blood Pressure in Ethnic Residents (BARBER-1) study. Nine barbershops with 695 patrons who were found to have hypertension then were randomly allocated to the intervention, and eight barbershops with 602 patrons who had hypertension were randomly allocated to a comparison group.
Most of the barbershop clients were middle income.
The comparison group was not strictly a control group; patrons there underwent two BP screenings at baseline and received standard written explanations and recommendations for physician follow-up, because failing to advise them would have been unethical. The comparison barbershops also made available American Heart Association pamphlets entitled “High Blood Pressure in African Americans.”
For the intervention, barbers continually offered all male clients blood pressure checks along with their haircuts. They displayed large posters depicting authentic stories of other male hypertensive patrons of the same shop modeling treatment-seeking behavior, using the model's own words to tell the story. Barbers and other male patrons also discussed the issue conversationally.
The barbers were trained, equipped, and paid to conduct BP testing and interpret the results, with the main focus on encouraging clients who had positive results to consult a physician. They referred clients who had no physician to a nursing staff that then referred them to local physicians or safety-net clinics. Barbers also gave patrons found to be hypertensive a wallet-sized card for the physician to sign, documenting an office visit concerning hypertension.
The barbers were paid $3 for every recorded blood pressure they took, $10 for every referral they made to the nursing staff, and $50 for every BP card that clients returned to them with physicians' signatures. Patrons received free haircuts (a $12 value) for every BP card they returned with a physician's signature.
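Under that fee schedule, a barber's compensation is a straightforward linear tally. A minimal sketch (the per-item amounts are from the text; the example counts are hypothetical):

```python
def barber_payment(bp_checks: int, referrals: int, signed_cards: int) -> int:
    """Total payment to a barber under the BARBER-1 fee schedule:
    $3 per recorded blood pressure, $10 per referral to the
    nursing staff, $50 per BP card returned with a physician's
    signature."""
    return 3 * bp_checks + 10 * referrals + 50 * signed_cards

# e.g., 8 recorded BP checks, 1 nursing referral, 1 signed card
print(barber_payment(8, 1, 1))  # 84
```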
Overall, nearly half of the patrons who were screened had high blood pressure; 78% of them were already aware that they were hypertensive, and 69% said they were taking treatment for hypertension, yet only 38% had their blood pressure under control.
Barbers were able to measure blood pressure in three of every four patrons who had hypertension, and each hypertensive client averaged eight blood pressure checks during the 10-month study. “The barbers motivated 50% of their patrons with elevated BP readings to visit a physician,” the researchers said.
The rate of blood pressure control – the proportion of men who achieved blood pressure control during BARBER-1 – improved by about 10% in the comparison group, and by an additional and significant 8% in the intervention group. That represents a nearly 20% improvement over the baseline rate of blood pressure control.
The intervention group also showed an absolute decrease of 2.5 mm Hg in systolic blood pressure compared with the control group, a secondary outcome of borderline significance, the investigators said (Arch. Intern. Med. 2010 Oct. 25 [doi:10.1001/archinternmed.2010.390]).
The National Heart, Lung, and Blood Institute, Donald W. Reynolds Foundation, the Aetna Foundation Regional Health Disparity Program, Pfizer, Biovail, Cedars-Sinai Heart Institute, the Lincy Foundation, and the Robert Wood Johnson Foundation supported the trial. Dr. Victor reported ties to Pfizer and Biovail.
Guidelines Revised for Poststroke Prevention
To give clinicians “the most up-to-date evidence-based recommendations for the prevention of ischemic stroke among survivors of ischemic stroke or transient ischemic attack,” the American Heart Association and American Stroke Association published updated guidelines.
The new guidelines, which are intended to help clinicians select preventive therapies for individual patients, have been endorsed by the American Academy of Neurology as an educational tool for neurologists. The American Association of Neurological Surgeons and the Congress of Neurological Surgeons have affirmed their educational content as well.
“Since the last update [in 2006], we've had results from several studies testing different interventions. We need to reevaluate the science every few years to optimize prevention,” Dr. Karen L. Furie, chair of the 18-member writing committee and director of the stroke service at Massachusetts General Hospital, Boston, said in a statement accompanying the updated guidelines.
Approximately one-fourth of the nearly 800,000 strokes that occur each year in the United States are recurrences in patients who have already had a stroke or TIA, Dr. Furie and her colleagues noted (Stroke 2010 Oct. 21 [doi:10.1161/STR.0b013e3181f7d043]).
New recommendations in the guidelines cover control of risk factors, interventions for atherosclerotic disease, antithrombotic therapies for cardioembolism, and use of antiplatelet drugs for noncardioembolic stroke.
Controlling Risk Factors
While the clinical usefulness of screening patients for the metabolic syndrome remains controversial, the guidelines advise that patients already diagnosed with the disorder should be counseled on improving their diet, exercising, and losing weight to reduce their stroke risk.
The individual components of the metabolic syndrome that raise the risk of stroke – particularly dyslipidemia and hypertension – should be treated. Survivors of TIA or stroke who have diabetes should follow existing treatment guidelines for glycemic control and blood pressure management.
Atherosclerotic Disease Interventions
The writing committee recommended that patients with stenosis of the carotid artery or vertebral artery should receive optimal medical therapy, including antiplatelet drugs, statins, and risk factor modification. In patients whose TIA or stroke was due to 50%–99% stenosis of a major intracranial artery, they advised prescribing aspirin therapy (50–325 mg daily) over warfarin. Long-term maintenance of blood pressure at less than 140/90 mm Hg and total cholesterol at less than 200 mg/dL “may be reasonable,” they wrote. The usefulness of angioplasty, with or without stent placement, for an intracranial artery stenosis is not yet known in this population and is considered investigational. Extracranial-intracranial bypass surgery is not recommended.
For patients with atherosclerotic ischemic stroke or TIA who do not have coronary heart disease, the committee stated that “it is reasonable to target a reduction of at least 50% in LDL-C or a target LDL-C level of less than 70 mg/dL.”
Antithrombotics for Cardio- and Noncardioembolic Stroke
The guidelines recommend that patients who need anticoagulation therapy but cannot take oral anticoagulants should be given aspirin alone. They warn that the combination of aspirin plus clopidogrel “carries a risk of bleeding similar to that of warfarin and therefore is not recommended for patients with a hemorrhagic contraindication to warfarin.”
Any temporary interruption to anticoagulation therapy in patients who have atrial fibrillation and are otherwise at high risk for stroke calls for the use of bridging therapy with subcutaneous administration of low-molecular-weight heparin, according to the guidelines.
Dr. Furie and the committee members recommended caution in using warfarin in patients who have cardiomyopathy characterized by systolic dysfunction (a left ventricular ejection fraction of 35% or less) because of a lack of proven benefit.
Evidence is also insufficient to establish whether anticoagulation therapy is better than aspirin therapy for secondary stroke prevention in patients who have a patent foramen ovale.
The guidelines also address secondary stroke prevention under a variety of special circumstances, such as cases of arterial dissection, hyperhomocysteinemia, hypercoagulable states, and sickle cell disease. They also detail management specific to women, particularly concerning pregnancy and the use of postmenopausal hormone replacement.
Dr. Furie reported receiving research grants from the National Institute of Neurological Disorders and Stroke as well as the ASA-Bugher Foundation Center for Stroke Prevention Research. Some of her 17 coauthors disclosed receiving research support from, speaking for, consulting for, or sitting on advisory boards of companies that manufacture drugs commonly prescribed for stroke prevention.
The new guidelines can be obtained at
To give clinicians “the most up-to-date evidence-based recommendations for the prevention of ischemic stroke among survivors of ischemic stroke or transient ischemic attack,” the American Heart Association and American Stroke Association published updated guidelines.
The new guidelines, which are intended to help clinicians select preventive therapies for individual patients, have been endorsed by the American Academy of Neurology as an educational tool for neurologists. The American Association of Neurological Surgeons and the Congress of Neurological Surgeons have affirmed their educational content as well.
“Since the last update [in 2006], we've had results from several studies testing different interventions. We need to reevaluate the science every few years to optimize prevention,” Dr. Karen L. Furie, chair of the 18-member writing committee and director of the stroke service at Massachusetts General Hospital, Boston, said in a statement accompanying the updated guidelines.
Approximately one-fourth of the nearly 800,000 strokes that occur each year in the United States are recurrences in patients who have already had a stroke or TIA, Dr. Furie and her colleagues noted (Stroke 2010 Oct. 21 [doi:10.1161/STR.0b013e3181f7d043]).
New recommendations in the guidelines cover control of risk factors, interventions for atherosclerotic disease, antithrombotic therapies for cardioembolism, and use of antiplatelet drugs for noncardioembolic stroke.
Controlling Risk Factors
While the clinical usefulness of screening patients for the metabolic syndrome remains controversial, the guidelines advise that patients already diagnosed with the disorder should be counseled to improve their diet, exercise more, and lose weight to reduce their stroke risk.
The individual components of the metabolic syndrome that raise the risk of stroke – particularly dyslipidemia and hypertension – should be treated. Survivors of TIA or stroke who have diabetes should follow existing treatment guidelines for glycemic control and blood pressure management.
Atherosclerotic Disease Interventions
The writing committee recommended that patients with stenosis of the carotid artery or vertebral artery should receive optimal medical therapy, including antiplatelet drugs, statins, and risk factor modification. In patients whose TIA or stroke was due to 50%–99% stenosis of a major intracranial artery, they advised prescribing aspirin therapy (50–325 mg daily) over warfarin. Long-term maintenance of blood pressure at less than 140/90 mm Hg and total cholesterol at less than 200 mg/dL “may be reasonable,” they wrote. The usefulness of angioplasty, with or without stent placement, for an intracranial artery stenosis is not yet known in this population and is considered investigational. Extracranial-intracranial bypass surgery is not recommended.
For patients with atherosclerotic ischemic stroke or TIA who do not have coronary heart disease, the committee stated that “it is reasonable to target a reduction of at least 50% in LDL-C or a target LDL-C level of less than 70 mg/dL.”
Antithrombotics for Cardio- and Noncardioembolic Stroke
The guidelines recommend that patients who need anticoagulation therapy but cannot take oral anticoagulants should be given aspirin alone. They warn that the combination of aspirin plus clopidogrel “carries a risk of bleeding similar to that of warfarin and therefore is not recommended for patients with a hemorrhagic contraindication to warfarin.”
Any temporary interruption to anticoagulation therapy in patients who have atrial fibrillation and are otherwise at high risk for stroke calls for the use of bridging therapy with subcutaneous administration of low-molecular-weight heparin, according to the guidelines.
Dr. Furie and the committee members recommended caution in using warfarin in patients who have cardiomyopathy characterized by systolic dysfunction (a left ventricular ejection fraction of 35% or less) because of a lack of proven benefit.
Evidence is also insufficient to establish whether anticoagulation therapy is better than aspirin therapy for secondary stroke prevention in patients who have a patent foramen ovale.
The guidelines also address secondary stroke prevention under a variety of special circumstances, such as cases of arterial dissection, hyperhomocysteinemia, hypercoagulable states, and sickle cell disease. They also detail management specific to women, particularly concerning pregnancy and the use of postmenopausal hormone replacement.
Dr. Furie reported receiving research grants from the National Institute of Neurological Disorders and Stroke as well as the ASA-Bugher Foundation Center for Stroke Prevention Research. Some of her 17 coauthors disclosed receiving research support from, speaking for, consulting for, or serving on advisory boards of companies that manufacture drugs commonly prescribed for stroke prevention.
The new guidelines can be obtained at
Higher Stroke, Death Rates Persist With Carotid Stenting
Major Finding: The long-term risk of stroke is 48% higher after carotid stenting than after carotid endarterectomy, and the long-term risk of death or stroke is 24% higher.
Data Source: A meta-analysis of 13 recent randomized clinical trials comparing the two approaches.
Disclosures: One investigator reported receiving research grants from AstraZeneca, Bristol-Myers Squibb, Eisai, Ethicon, Heartscape, Sanofi Aventis, and the Medicines Company.
Carotid artery stenting carries higher intermediate- and long-term risks than does carotid endarterectomy, not just higher periprocedural risks, according to the largest, most comprehensive meta-analysis of evidence from randomized trials to date.
The safety and efficacy of carotid stenting as an alternative to endarterectomy are controversial. Studies have shown that stenting is more likely to cause periprocedural stroke, but data on longer-term outcomes are limited, said Dr. Sripal Bangalore of New York University, New York, and his associates.
They examined 13 randomized controlled trials that reported outcomes at 30 days or later and included 3,754 patients assigned to stenting and 3,723 to endarterectomy. The mean follow-up in the trials was 2.7 years.
In the short term, stenting was associated with a 31% increase in periprocedural death, MI, or stroke, compared with endarterectomy. Absolute rates of periprocedural death, MI, or stroke were 5.7% with stenting and 4.7% with endarterectomy, they said.
In the long term, the risk for that composite outcome plus later ipsilateral stroke or death was 19% higher after stenting than it was after endarterectomy. In comparison with endarterectomy, stenting carried a 38% higher risk of the composite outcome of periprocedural stroke or death plus later ipsilateral stroke, a 24% higher risk of the composite outcome of death or stroke, and a 48% increased risk of any stroke.
These increases in long-term risks were consistent across several subgroups: symptomatic or asymptomatic, low risk or high risk, American or non-American; and regardless of whether an embolic protection device was used, Dr. Bangalore and his colleagues wrote (Arch. Neurol. 2010 Oct. 11 [doi:10.1001/archneurol.2010.262]).
However, the rate of periprocedural MI was significantly lower with carotid stenting (0.3%) than with endarterectomy (1.2%). And stenting was associated with an 85% reduction in the risk of cranial nerve injury, all of which occurred in the periprocedural period.
Early Dialysis Linked to Greater 1-Year Mortality
Although the use of early hemodialysis in relatively young and healthy end-stage renal disease patients has more than doubled since 1996, that practice may have increased their 1-year mortality risk, new research suggests.
The findings, together with those of several other studies of the issue, indicate that early hemodialysis – begun when the estimated glomerular filtration rate (eGFR) is still 10 mL/minute per 1.73 m² or higher – may do more harm than good.
According to U.S. Renal Data System (USRDS) records, the proportion of patients initiating hemodialysis early rose from 20% to 52% between 1996 and 2008, even though there is no evidence of substantial benefit with the practice. In fact, nine recent studies have reported a survival disadvantage with early hemodialysis.
Critics of those studies say that earlier hemodialysis makes intuitive sense, and that the high rates of comorbidities and older age in most study subjects confounded the results. To “reduce or eliminate much of the selection biases and lessen the need for multiple adjustments for comorbid conditions that confounded earlier studies,” Dr. Rosansky and his colleagues undertook a large observational study restricted to the Medicare records of 81,176 relatively young ESRD patients (aged 20-64 years) who had no comorbidities other than hypertension.
In that healthy cohort, mortality in the first year after the start of hemodialysis was 9.4%, compared with an average 24% 1-year mortality in the entire USRDS population.
The investigators said that 1-year mortality was 20.1% in patients who started hemodialysis early, compared with 6.8% in those who started later.
Patients with the lowest albumin levels (less than 2.5 g/dL) were five times as likely to die in the first year of hemodialysis as were patients with the highest albumin levels (at least 3.5 g/dL), at 21% vs. 4.7%, respectively.
Among the healthiest group (albumin level at least 3.5 g/dL), those patients with an eGFR of at least 15 were 3.5 times as likely to die as were those with an eGFR of less than 5 (1-year mortality rates of 12.5% vs. 3.6%, respectively).
It is possible that the poorer survival might be related to fewer competing factors for mortality in those young and relatively healthy patients, the researchers noted.
Alternatively, relatively healthy patients with a higher eGFR at the start of hemodialysis “might have been more susceptible to potential harm from the hemodialysis procedure,” the investigators said.
The authors replicated their analyses using serum creatinine values rather than eGFR as a measure of kidney function, “and the results were the same; ostensibly, better kidney function … was associated with higher mortality,” they added.
A potential limitation of the study was the fact that it was based on registry data. Approximately one-third of the subjects in the study who were listed as having no comorbidities were missing one or more laboratory values corroborating that classification. However, separate analysis excluding that group produced the same result as with the entire study cohort.
The mechanism by which earlier hemodialysis may raise 1-year mortality is not yet known. “Possible mechanisms might include recurrent episodes of myocardial ischemia and 'stunning,' and eventual functional and structural changes with fixed systolic dysfunction induced by conventional thrice-weekly hemodialysis,” Dr. Rosansky and his associates said.
In addition, research has shown that endogenous renal function provides a survival benefit over hemodialytic clearance, and more than half of endogenous renal function can be lost during the first months of hemodialysis therapy, they noted.
The results of the study, together with those of other studies, “provide evidence questioning the trend to early start of hemodialysis,” the investigators said. “Initiation of hemodialysis should not be based on an arbitrary level of eGFR or serum creatinine level unless this measure is accompanied by definitive end-stage renal failure-related indications for hemodialysis.”
One of Dr. Rosansky's associates reported ties to numerous industry sources.
The findings of this study, like those of the randomized controlled Initiating Dialysis Early and Late (IDEAL) clinical trial, do not support the widespread practice of beginning hemodialysis based on numerical criteria alone, said Dr. Kirsten L. Johansen.
“Rather, we need to reexamine what we consider to be uremic symptoms worthy of dialysis initiation. The bar for these symptoms has been dramatically lowered in recent years, with no data to support a benefit to patients,” she said.
Early hemodialysis in the study not only failed to improve survival, it also failed to improve quality of life.
“I am suggesting that (in the absence of uremic indications) we shift our paradigm to consider starting dialysis when the symptoms are worse than the anticipated lifestyle burden and effects of dialysis, which are considerable and include a substantial time commitment, frequent fatigue, and infections, among other things,” Dr. Johansen noted.
That approach of carefully weighing clinical factors and quality-of-life issues “will require close follow-up and ongoing discussion with our patients,” she added.
KIRSTEN L. JOHANSEN, M.D., is at the San Francisco VA Medical Center and the University of California, San Francisco. She reported no financial disclosures. The comments are taken from her editorial accompanying Dr. Rosansky's report (Arch. Intern. Med. 2010 [doi:10.1001/archinternmed.2010.413]).
Combo Exercise Regimen Lowers HbA1c in Diabetes
Combined aerobic and resistance exercise training lowered hemoglobin A1c levels modestly in patients with type 2 diabetes, while either type of training alone did not, according to a recent report.
Patients who participated in the combined exercise also were able to decrease their hypoglycemic medication more often than were those who participated in either type of exercise alone, said Dr. Timothy S. Church of Pennington Biomedical Research Center at Louisiana State University, Baton Rouge, and his associates.
The investigators assessed outcomes in 262 sedentary adults (mean age 56 years) with type 2 diabetes during a 9-month exercise intervention in which no attempt was made to alter patients' diets, medication usage, or other lifestyle factors. The study subjects were randomly assigned to undergo aerobic training only (72 patients), resistance training only (73 patients), a combination of both (76 patients), or no exercise training (41 patients serving as a control group).
The interventions were specifically designed so that all study subjects would spend the same amount of time exercising – approximately 140 minutes per week. This ensured that any differences between the combined-exercise group and the other exercise groups could be attributed to the activity itself, rather than to an extended time spent exercising in the combination group.
All the interventions took place in a laboratory facility and were closely supervised. In addition, study subjects had monthly visits with a certified diabetes educator who reviewed fasting glucose records and measured weight and HbA1c levels from finger-prick blood samples.
The study population was ethnically diverse (44% African American) and included a relatively high proportion of women (63%). The mean duration of diabetes was 7 years and the mean BMI was 34.9. A total of 97% of the subjects were taking diabetes medications, including 18% taking insulin.
Compared with the control group, the combination exercise group showed an absolute decrease in HbA1c levels of 0.34%. Patients who performed resistance training only showed a 0.16% decrease, and those who performed aerobic training only showed a 0.24% decrease, neither of which was statistically significant.
“An absolute decrease of 1% in HbA1c levels has been associated with a 15%-20% decrease in major cardiovascular disease events and 37% decrease in microvascular complications. Thus, our observed reduction [of 0.3%-0.4%] might be expected to produce a 5%-7% reduction in cardiovascular disease risk and a 12% reduction in microvascular complications,” Dr. Church and his associates said (JAMA 2010;304:2253-62).
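The authors' projection is a simple linear scaling of the 1% benchmark down to the observed reduction. A sketch of that arithmetic, using only the figures quoted above and assuming (as the authors do) that the benefit scales proportionally:

```python
# Scale the per-1%-HbA1c benefit estimates down to the observed
# 0.34% absolute reduction. Linearity is an assumption, not a finding.
def scaled_benefit(observed_drop: float, benefit_per_1pct: float) -> float:
    return observed_drop * benefit_per_1pct

cvd_low  = scaled_benefit(0.34, 15)  # lower bound, CVD events
cvd_high = scaled_benefit(0.34, 20)  # upper bound, CVD events
micro    = scaled_benefit(0.34, 37)  # microvascular complications
print(round(cvd_low, 1), round(cvd_high, 1), round(micro, 1))
# roughly 5.1, 6.8, and 12.6 - consistent with the quoted 5%-7% and 12%
```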
This study was supported by the National Institutes of Health. Dr. Church and his associates reported numerous ties to scientific, educational, and lay groups, as well as to makers of pharmaceuticals and medical devices.
When combined with aerobic exercise, resistance training with weights lowered hemoglobin A1c levels in patients with type 2 diabetes.
Source ©diego cervo/iStockphoto.com
Otitis Research Supports New AAP Guidelines
Findings from a systematic review of the literature published through July 2010 will support the new acute otitis media practice guidelines now being prepared by the American Academy of Pediatrics, according to a recent report.
To inform the upcoming AAP practice guideline, experts reviewed the latest results on AOM diagnosis, the changing microbial epidemiology associated with introduction of the heptavalent pneumococcal conjugate vaccine (PCV7), the decision about whether to treat with antibiotics, and the comparative effectiveness of various antibiotics. The review updates the group's 2001 systematic review, which was the basis of the 2004 AAP–American Academy of Family Physicians joint practice guideline on AOM, said Dr. Tumaini R. Coker of the University of California, Los Angeles, and the RAND Corp., Los Angeles, and her associates.
They included 80 articles used in the previous systematic review and 55 published since that time, reviewing both randomized controlled trials and observational studies (JAMA 2010;304:2161-9). Among their findings were the following:
▸ Otoscopic signs of inflammation (redness) and effusion (bulging or immobile tympanic membrane) are strongly associated with accurate diagnosis of AOM, while the importance of clinical symptoms is “less convincing.”
“Perhaps the most important way to improve diagnosis is to increase clinicians' ability to recognize and rely on key otoscopic findings,” Dr. Coker and her colleagues said.
▸ AOM microbiology has shifted significantly since the introduction of PCV7, with Haemophilus influenzae becoming more prevalent and Streptococcus pneumoniae becoming less so. However, a recent study indicates that this balance may be shifting back again “because of an increase in the proportion of AOM with nonvaccine S. pneumoniae serotypes.” Clinicians must stay current with microbial trends, especially given the recent approval of PCV13, the researchers said.
▸ Immediate ampicillin/amoxicillin treatment has a modest advantage over delayed antibiotic therapy or placebo, but also is more likely to cause diarrhea and rash. “Of 100 average-risk children with AOM, approximately 80 would likely get better within 3 days without antibiotics. If all were treated with immediate ampicillin/amoxicillin, an additional 12 would likely improve, but 3-10 children would develop rash and 5-10 would develop diarrhea. Clinicians need to weigh these risks (including possible long-term effects on antibiotic resistance) before prescribing immediate antibiotics for uncomplicated AOM,” the investigators said.
▸ Most antibiotics have similar clinical efficacy in children at average risk who have uncomplicated AOM. “We found no evidence of the superiority of any other antibiotic over amoxicillin,” they noted.
In particular, there is no evidence to support first-line use of more expensive antibiotics such as cefdinir or cefixime. In a given year, cefdinir is prescribed at 14% of the estimated 8 million physician visits for AOM, according to an analysis of data from the National Ambulatory Medical Care Survey. Assuming that such prescription is appropriate in approximately half of these cases because of a penicillin allergy, if physicians prescribed amoxicillin instead of cefdinir in the other half of cases, annual savings would exceed $34 million, Dr. Coker and her associates said.
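The savings estimate above can be reconstructed from the figures given. The per-prescription price gap between cefdinir and amoxicillin is not stated in this summary, so this sketch solves backward from the reported >$34 million to show the gap that figure implies:

```python
# Back-of-envelope reconstruction of the reported cefdinir savings.
visits_total = 8_000_000   # estimated annual AOM physician visits
cefdinir_share = 0.14      # fraction of visits with a cefdinir prescription

# Half of cefdinir prescriptions are assumed appropriate (penicillin
# allergy); the other half could be switched to amoxicillin.
switchable = visits_total * cefdinir_share * 0.5

# Implied per-prescription price difference behind the $34 million figure.
implied_gap = 34_000_000 / switchable
print(int(switchable), round(implied_gap, 2))  # 560000 prescriptions, ~$60.71 each
```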
This study was supported by the Agency for Healthcare Research and Quality. One of Dr. Coker's associates reported selling Pfizer stock at the start of the study.
Depressed Mood Is Related To High Intake of Chocolate
Depressed mood was significantly related to higher consumption of chocolate in a cross-sectional study of 931 men and women.
Because a cross-sectional design precludes drawing conclusions about causality or even the directionality of an association, the study could not determine which of the two variables – depressed mood or chocolate intake – precedes, much less causes, the other, said Dr. Natalie Rose of the department of obstetrics and gynecology, University of California, Davis, and her associates.
A recent exploratory study of sweets in general noted that subjects with depressive symptoms showed a higher than average intake of chocolate.
That study was confined to women only, used a single measure of chocolate consumption, and used a measure of mood symptoms that is not widely recognized.
In contrast, Dr. Rose and her colleagues examined this possible link using a larger sample of both men and women, two measures of chocolate consumption (a food frequency questionnaire and another questionnaire that quantified chocolate intake specifically), and the Center for Epidemiological Studies–Depression Scale (CES-D) to measure mood (Arch. Intern. Med. 2010;170:699–703).
A CES-D score of 16 or higher indicates possible depressed mood, while a score of 22 or higher indicates major depression. In this study, subjects with scores of 16 or higher ate significantly more chocolate than did subjects with lower scores (8.4 vs. 5.4 servings per month), and those with scores of at least 22 ate an average of almost 12 servings per month. A serving was considered to be “one small bar or 1 ounce (28 g) chocolate candy.”
To ensure that this association was specific to chocolate, the investigators looked for similar links between depressive symptoms and the intake of fat, carbohydrates, and total energy.
There were no significant associations. In addition, no link was found between mood and the consumption of other antioxidant foods, such as fish, coffee, caffeine, or fruits and vegetables.
There are several possible explanations for the association between depressive symptoms and high chocolate consumption.
Depression could stimulate chocolate cravings as a means of self-medication. Or a high intake of chocolate could contribute to depressed mood. Or some unknown factor such as oxidative stress or inflammation could produce both depressive symptoms and chocolate cravings.
Alternatively, the association might be more complex, with chocolate itself producing mood-elevating effects but with some constituent that is frequently combined with chocolate (such as trans fats) neutralizing or reversing this benefit. Or it might be that chocolate intake is analogous to alcohol intake, in that it produces short-term mood elevation but longer term depressive effects.
“Distinguishing among these possibilities will require different study designs,” Dr. Rose and her associates said.
This study was funded by the National Heart, Lung, and Blood Institute; the National Institutes of Health; and the University of California, San Diego, General Clinical Research Center. No financial conflicts of interest were reported.
FROM THE ARCHIVES OF INTERNAL MEDICINE
Anxiety Disorders Program Bests Usual Care
A program aimed at treating the most common anxiety disorders in primary care clinics proved more effective than usual care, according to the findings of a randomized controlled trial reported in JAMA.
The Coordinated Anxiety Learning and Management (CALM) program involves evidence-based treatment of panic disorder, generalized anxiety disorder, social anxiety disorder, and posttraumatic stress disorder, with or without the presence of comorbid depression, said Dr. Peter Roy-Byrne of the University of Washington, Seattle, and his associates.
The CALM model uses an Internet-based system to monitor the delivery of care by “anxiety clinical specialists” such as nurses, social workers, or psychologists who are trained to deliver the program's treatment. These specialists keep in close touch with a primary care physician throughout the 10–12 weeks of treatment. They use a computer program to help them administer cognitive-behavioral therapy and/or pharmacotherapy with selective serotonin reuptake inhibitors, serotonin and norepinephrine reuptake inhibitors, other types of antidepressants, or benzodiazepines.
Outcomes among 503 patients randomized to the CALM program were compared with those of 501 patients assigned to usual care. Patients were enrolled from 17 primary care clinics in Arkansas, California, and Washington. Usual care involved in-clinic mental health resources – which often meant “a single clinician with limited familiarity with evidence-based psychotherapy” – or referral to a mental health specialist. Treatment duration lasted 3–12 months (JAMA 2010;303:1921–8).
The study participants were diagnosed as having one or more of the four anxiety disorders, with or without comorbid depression, and were referred by 120 internists and 28 family physicians. The patient population was ethnically diverse and had a broad age range (18–75 years). Patients underwent a battery of assessments at baseline and at 6-month intervals for 18 months to track their outcomes.
Patients in the intervention group were significantly more likely than those in the usual-care group to receive psychotherapy that included elements of cognitive-behavioral therapy and to receive the appropriate type, dose, and duration of medication. In addition, their scores on the Brief Symptom Inventory measuring psychic and somatic anxiety were significantly lower than those of the usual-care group at all follow-up assessments, Dr. Roy-Byrne and his associates said.
Accordingly, a significantly higher proportion of patients in the CALM program responded at 6 months (57%), 12 months (64%), and 18 months (65%) than patients who received usual care (37%, 45%, and 51% response rates, respectively).
Similarly, a significantly higher proportion of patients in the CALM program were in remission at these intervals (43%, 51%, and 51%, respectively) than were usual-care patients (27%, 33%, and 37%).
At 1 year, the number needed to treat was 5.3 for response and 5.5 for remission. This “was well within the range for treatments in medicine that are generally considered to be efficacious, and beneficial effects of the intervention persisted for at least 1 year after clinical visits had ceased, suggesting a long-term effect,” the investigators noted.
This study was supported by the National Institute of Mental Health.
Dr. Roy-Byrne reported receiving support from the National Institutes of Health. The researchers reported receiving support or have relationships with Jazz Pharmaceuticals, Solvay Pharmaceuticals, the American Psychiatric Association, the Anxiety Disorders Association of America, CMP Media, Current Medical Directions, Imedex, Massachusetts General Hospital Academy, and PRIMEDIA Healthcare, as well as serving as expert witnesses on multiple legal cases related to anxiety.
A program aimed at treating the most common anxiety disorders in primary care clinics proved more effective than usual care, according to the findings of a randomized controlled trial reported in JAMA.
The Coordinated Anxiety Learning and Management (CALM) program involves evidence-based treatment of panic disorder, generalized anxiety disorder, social anxiety disorder, and posttraumatic stress disorder, with or without the presence of comorbid depression, said Dr. Peter Roy-Byrne of the University of Washington, Seattle, and his associates.
The CALM model uses an Internet-based system to monitor the delivery of care by “anxiety clinical specialists” such as nurses, social workers, or psychologists who are trained to deliver the program's treatment. These specialists keep in close touch with a primary care physician throughout the 10–12 weeks of treatment. They use a computer program to help them administer cognitive-behavioral therapy and/or pharmacotherapy with selective serotonin reuptake inhibitors, serotonin and norepinephrine reuptake inhibitors, other types of antidepressants, or benzodiazepines.
Outcomes among 503 patients randomized to the CALM program were compared with those of 501 patients assigned to usual care. Patients were enrolled from 17 primary care clinics in Arkansas, California, and Washington. Usual care consisted of in-clinic mental health resources – which often meant “a single clinician with limited familiarity with evidence-based psychotherapy” – or referral to a mental health specialist. Treatment lasted 3–12 months (JAMA 2010;303:1921–8).
The study participants were diagnosed as having one or more of the four anxiety disorders, with or without comorbid depression, and were referred by 120 internists and 28 family physicians. The patient population was ethnically diverse and had a broad age range (18–75 years). Patients underwent a battery of assessments at baseline and at 6-month intervals for 18 months to track their outcomes.
Patients in the intervention group were significantly more likely than those in the usual-care group to receive psychotherapy that included elements of cognitive-behavioral therapy and to receive the appropriate type, dose, and duration of medication. In addition, their scores on the Brief Symptom Inventory measuring psychic and somatic anxiety were significantly lower than those of the usual-care group at all follow-up assessments, Dr. Roy-Byrne and his associates said.
Accordingly, a significantly higher proportion of patients in the CALM program responded at 6 months (57%), 12 months (64%), and 18 months (65%) than patients who received usual care (37%, 45%, and 51% response rates, respectively).
Similarly, a significantly higher proportion of patients in the CALM program were in remission at these intervals (43%, 51%, and 51%, respectively) than were usual-care patients (27%, 33%, and 37%).
At 1 year, the number needed to treat was 5.3 for response and 5.5 for remission. This “was well within the range for treatments in medicine that are generally considered to be efficacious, and beneficial effects of the intervention persisted for at least 1 year after clinical visits had ceased, suggesting a long-term effect,” the investigators noted.
This study was supported by the National Institute of Mental Health.
Dr. Roy-Byrne reported receiving support from the National Institutes of Health. The researchers reported receiving support from or having relationships with Jazz Pharmaceuticals, Solvay Pharmaceuticals, the American Psychiatric Association, the Anxiety Disorders Association of America, CMP Media, Current Medical Directions, Imedex, Massachusetts General Hospital Academy, and PRIMEDIA Healthcare, as well as serving as expert witnesses in multiple legal cases related to anxiety.
FROM JAMA
Outcomes After Allogeneic Hematopoietic-Cell Transplant Dramatically Better
Mortality risk has declined dramatically for patients undergoing allogeneic hematopoietic-cell transplantation in recent years, and long-term survival has improved substantially, according to a report in the Nov. 25 issue of the New England Journal of Medicine.
The improved outcomes appear to be related to marked reductions in organ damage, infection, and severe acute graft-vs.-host disease (GVHD), said Ted A. Gooley, Ph.D., and his associates at the Fred Hutchinson Cancer Research Center and the University of Washington, Seattle.
The investigators hypothesized that several changes in the care of transplant patients have likely improved outcomes in recent years, and they tested their hypothesis by comparing several outcome measures between two large patient cohorts at their cancer center: 1,418 patients who received their first allogeneic transplants in 1993-1997 and 1,148 who did so in 2003-2007.
Overall mortality decreased by 41% between the two time periods.
The overall risk of death not preceded by relapse decreased by 52%, and the risk of death not preceded by relapse within 200 days of transplant decreased by 60%.
The rate of relapse or progression of the malignant condition decreased by 21%.
These improvements were seen consistently across numerous subgroups of patients. They occurred across diagnostic categories including acute lymphocytic leukemia, acute myeloid leukemia, chronic myeloid leukemia, and myelodysplastic syndrome. And they were seen in patients who received transplants from a matched donor sibling, a mismatched sibling, a relative who was not a sibling, or an unrelated donor.
Almost every complication associated with transplantation showed improvement over time.
Concerning organ damage, the odds of developing jaundice dropped by more than 70%, and the odds of respiratory failure decreased by 36%. The risk of developing acute renal injury also declined significantly.
Regarding infections, the rate of early cytomegalovirus infection declined by 48%. The rate of bacteremia with a gram-negative organism decreased by 39%, that of invasive mold infection dropped by 51%, and the rate of invasive candida infection was cut by an impressive 88% between the two study periods.
At the same time, the proportion of patients with any degree of acute GVHD – mild, moderate, or severe – also decreased significantly. In particular, the odds of developing grade 3 or 4 GVHD declined 67%, even though the use of peripheral-blood hematopoietic cells instead of bone marrow has increased.
All of these improvements occurred despite the fact that during this time interval, transplantation was expanded to include patients who were older, were more seriously ill, and had more advanced disease, Dr. Gooley and his associates said (N. Engl. J. Med. 2010;363[22]:2091-101).
"Several changes in our transplantation practice appear to have contributed to improved outcomes," they noted.
Patients who have coexisting medical conditions now receive a less toxic conditioning regimen to preserve their organ function. In some patients, the dose of total-body irradiation is limited, fludarabine is substituted for cyclophosphamide, or cyclophosphamide dosing is individualized to head off organ failure.
The decrease in GVHD is due in part to the introduction of ursodiol prophylaxis, which prevents cholestasis and accounts for the near-disappearance of stage 4 hepatic GVHD. Treatment of emergent GVHD also has changed, with universal prednisone therapy being replaced by a more individualized approach. This reduced the average exposure to prednisone by 48%, which in turn cut the rate of prednisone-related CMV, fungal, and bacterial infections.
The increased use of peripheral-blood donor cells allowed significantly faster neutrophil engraftment and earlier recovery of immunity against fungal and bacterial infection. In addition, antibacterial prophylaxis in neutropenic patients switched from cephalosporins to quinolones, and "preemptive antiviral therapy is now based on a more sensitive diagnostic test for CMV viremia," the investigators said.
"In conclusion, our data show clear improvement in outcomes of transplantation between the period from 1993 through 1997 and the period from 2003 through 2007. The data also indicate areas of transplantation biology and patient care in which research is needed to achieve further progress – specifically, GVHD and graft-versus-tumor effects, immunologic tolerance, and the management of infection and recurrent malignant conditions," they said.
Dr. Gooley and his associates reported numerous ties to pharmaceutical and device manufacturers.
FROM THE NEW ENGLAND JOURNAL OF MEDICINE
Major Finding: Overall mortality following allogeneic hematopoietic-cell transplantation has decreased by 41% in recent years, mortality not preceded by relapse has decreased by 52%, and mortality within 200 days not preceded by relapse has decreased by 60%.
Data Source: A single-center study comparing mortality; GVHD; and hepatic, renal, pulmonary, and infectious complications related to allogeneic hematopoietic-cell transplantation in two cohorts: 1,418 patients treated in 1993-1997 and 1,148 patients treated in 2003-2007.
Disclosures: Dr. Gooley and his associates reported numerous ties to pharmaceutical and device manufacturers.