Diabetic Foot Infection Classification System Found Valid in 2-Year Study
A system for classifying diabetic foot infection proved effective at predicting adverse clinical outcomes in a 2-year cohort study, reported Lawrence A. Lavery, D.P.M., of Scott & White Hospital, Round Rock, Tex., and his associates.
In 2004, the Infectious Diseases Society of America (IDSA) and the International Working Group on the Diabetic Foot (IWGDF) each published guidelines for managing diabetic foot infections. Both sets of guidelines included “essentially identical” systems for classifying the severity of infection. In contrast, previous guidelines “either did not specifically define infection or, if they did, only noted its presence or absence,” the researchers said (Clin. Infect. Dis. 2007 Jan. 17 [Epub DOI:10.1086/511036]).
Both of the 2004 classification systems first categorize foot wounds as infected or not, based on the presence or absence of purulent secretions or local or systemic signs of inflammation or infection. They further categorize the infections as mild, moderate, or severe based on wound depth and size (especially the extent of cellulitis) and on the presence or absence of systemic manifestations of infection, such as fever, chills, leukocytosis, or metabolic aberrations.
The new classification systems were developed by “an international consensus of experts in various fields,” but until now no study had validated their ability to predict outcomes. Dr. Lavery and his associates did so by applying the classification systems to data that had already been collected on 1,666 subjects enrolled in a foot-care management program and followed for a mean of 27 months.
A total of 247 patients (14.8%) developed a foot wound and 151 (9.1%) developed a foot infection. Of the foot infections, 27 were classified as severe, and 50 patients required an amputation of some type. “Considering that these patients were screened for foot disorders at enrollment in the study, were educated about proper foot care, and had ready access to a foot clinic, we observed a higher incidence of foot infection than expected,” the investigators noted (Clin. Infect. Dis. 2007;44:562–5).
With increasing infection severity on the IDSA-IWGDF classification system, there were increasing risks of hospitalization, osteomyelitis, amputation, and other complications such as peripheral neuropathy and vascular disease.
“We believe the results of this study are the first to validate these new guidelines,” Dr. Lavery and his associates said.
They added that a reliable infection classification system, “designed to be simple to apply and easy to remember,” should help clinicians decide whether a patient should be hospitalized, whether to use parenteral or oral antibiotics, and how urgently surgery or other treatments should be performed.
Etoricoxib Caused Fewer GI Events Than Diclofenac in Arthritis Patients
The cyclooxygenase-2 inhibitor etoricoxib caused fewer clinically important upper GI events than the traditional NSAID diclofenac in a large study designed to reflect the real-world experience of treating osteoarthritis and rheumatoid arthritis.
The Multinational Etoricoxib and Diclofenac Arthritis Long-Term Program (MEDAL) pooled the results of three large randomized clinical trials involving nearly 35,000 patients treated at 1,380 sites in 46 countries. Unlike in most clinical trials, subjects in the MEDAL program were encouraged to use proton pump inhibitor therapy to protect against GI damage, and those at cardiovascular risk were encouraged to add low-dose aspirin to their regimens, investigators reported.
Etoricoxib and diclofenac had similar efficacy against arthritis. Upper GI events, primarily uncomplicated ulcers, were significantly less frequent with etoricoxib than with diclofenac. There was no difference between the two drugs in rates of more serious complicated events, reported Dr. Loren Laine and associates in the MEDAL program (Lancet 2007;369:465–73).
Significantly fewer patients taking etoricoxib discontinued treatment because of dyspepsia, compared with those taking diclofenac.
This study was sponsored by Merck Research Laboratories, which conducted the statistical analyses and was involved in data analysis, safety monitoring, and reporting.
SLE Activity Predictive of Severity of Ischemic Stroke
Severe ischemic strokes are common in systemic lupus erythematosus patients, and a high level of disease activity predicts their occurrence, reported Dr. Jamal Mikdashi and his associates.
“The pathogenesis of ischemic stroke in SLE involves more than the traditional Framingham risk factors,” but the features that predict stroke are not well understood in this patient population, the researchers wrote (Stroke 2007;38:281–5).
They studied predictive factors using data from the University of Maryland lupus cohort, in which 238 SLE patients were enrolled from 1992 to 2004 and were followed for a mean of 8 years. Of these subjects, 90% were women and 66% were black.
Ischemic strokes occurred in 44 patients (18%), and 34 of these (77%) were severe strokes, Dr. Mikdashi, of the University of Maryland, Baltimore, and his associates reported.
The most prevalent subtype was large-artery/atherothrombotic strokes (45%), followed by small-vessel/lacunar infarcts (39%). Seven patients (16%) had recurrent strokes during follow-up.
On univariate analysis, high baseline SLE activity, the presence of cutaneous vasculitis, and higher prednisone doses were significantly more common in subjects who had a stroke than in those who did not. On multivariate analysis, only high SLE activity remained a significant predictor of stroke.
When subjects were divided according to the severity of SLE activity at baseline, those with higher SLE activity scores were at twice the risk for ischemic stroke and at nearly three times the risk for severe ischemic stroke, compared with subjects with low SLE activity scores.
These findings suggest that besides conventional risk factors, “SLE patients may possess other characteristics that render them at greater risk for ischemic strokes,” the investigators wrote.
Not surprisingly, hypercholesterolemia and hypertension also were found to be strong independent predictors of ischemic stroke. A substudy of statin therapy in this cohort indicated that it may reduce stroke risk.
“Further studies will determine whether treating hyperlipidemia and other traditional risk factors in SLE patients may substantially reduce or prevent the development of severe stroke and whether such measures will have impact on mortality, disability, and quality of life in SLE,” Dr. Mikdashi and his associates noted.
Moderate Kidney Dysfunction Ups Risk for Hip Fractures in Women
Moderate renal impairment raises the risk of hip fracture, particularly trochanteric fracture, in older white women, reported Dr. Kristine E. Ensrud and her associates in the Study of Osteoporotic Fractures.
“These findings suggest that clinicians should consider including renal function as part of the risk assessment for hip fracture in elderly women,” the researchers reported. An increased rate of hip fractures has been reported in patients with end-stage renal disease, those undergoing dialysis, and those who have received a renal transplant. However, this is the first longitudinal study of the link between hip fracture and mild to moderate renal insufficiency, according to Dr. Ensrud of the Veterans Affairs Medical Center, Minneapolis, and her associates.
They conducted a case-cohort study within the Study of Osteoporotic Fractures, a prospective study of more than 9,700 women in four U.S. regions who were aged 65 years and older when enrolled in 1986–1988. The investigators assessed 149 white women randomly selected from among those who sustained hip fractures during a mean follow-up of 6 years, along with 377 women who did not sustain hip fractures.
A decreased estimated glomerular filtration rate (GFR) was significantly associated with an increased risk for hip fracture, even after the data were adjusted to account for traditional risk factors, the researchers reported (Arch. Intern. Med. 2007;167:133–9). In patients with a mildly decreased GFR, the hazard ratio for hip fracture was 1.7, and in those with a moderately decreased GFR it was 2.3, compared with subjects who had a normal GFR.
Similarly, in subjects with a mildly decreased GFR the risk of trochanteric fracture in particular was increased nearly fourfold, and in those with moderately decreased GFR it was increased fivefold, compared with those who had a normal GFR. The underlying mechanisms for these associations are not yet understood. Abnormalities in phosphorous, calcium, and vitamin D metabolism occur in even mild renal insufficiency. And moderate renal dysfunction has been linked with increased inflammation, impaired coagulation, anemia, and malnutrition, Dr. Ensrud and her associates noted.
In an editorial comment accompanying the report, Dr. Stuart M. Sprague of Northwestern University, Chicago, said that “a staggering 19.2 million Americans, or 11% of the adult population,” currently have chronic kidney disease (CKD).
The study findings “are potentially very important, as they support the concept that a diagnosis of osteoporosis based on [BMD] criteria should not be made in patients with CKD and used as a predictor of fracture outcome,” Dr. Sprague wrote (Arch. Intern. Med. 2007;167:115–6).
Repeat BMD Test of No Value for Older Women
Repeat bone mineral density testing 8 years after initial measurement does not improve the ability to predict fractures in healthy elderly women, according to Dr. Teresa A. Hillier and her associates.
Repeat BMD testing is done “commonly” in clinical practice, even though “there is little evidence evaluating the additional value of repeat BMD testing for fracture risk,” the investigators reported (Arch. Intern. Med. 2007;167:155–60).
The Study of Osteoporotic Fractures included 9,704 white women aged 65 years and older who were living in four regions of the United States. Of the women, 4,124 underwent initial BMD measurement in 1989–1990 and then had a repeat BMD measurement a mean of 8 years later, forming the sample for the current study, said Dr. Hillier of Kaiser Permanente Center for Health Research Northwest, Portland, Ore., and her associates.
The subjects were followed for an additional 5 years to track the incidence of fractures. BMD was measured at the proximal femur, including the intertrochanteric region, trochanter, femoral neck, and Ward's triangle. The 513 subjects who sustained a fracture between the initial and the repeat BMD assessments were excluded from the study.
Both measurements of BMD were significant predictors of hip fracture and nonspinal fracture risks. “Each standard deviation lower in either initial or repeat BMD was associated with a 55%–61% increased risk of incident nonspine fracture, a 102%–121% increased risk of incident hip fracture, and a 75%–86% increased risk of spine fracture,” Dr. Hillier and her associates reported.
However, the repeat BMD measurement did not add to the overall predictive value for any type of fracture. These results persisted in subgroup analyses of women who used estrogen or bisphosphonates, compared with those who did not.
Their findings do not imply that repeat BMD measurement may not be useful for certain individual patients, “particularly if intervening clinical factors are present that would likely accelerate BMD loss greater than average,” Dr. Hillier and her associates noted.
“However, our results do suggest that, for the average healthy older woman…a repeat BMD measurement has little or no value in classifying risk for future fracture—even for the average older woman who has osteoporosis by initial BMD measure, or high BMD loss,” they wrote, noting this study did not address BMD testing to monitor osteoporosis treatment response. These results may not be generalizable to men, nonwhite women, or women younger than 65.
Intervention Cut Central Catheter-Related Infections in ICUs by 66%
A “simple and inexpensive” intervention to reduce ICU infections related to central catheter lines decreased the infection rate by 66% in 107 hospitals throughout Michigan, according to a new study.
The overall median rate of central catheter-related bloodstream infections was held to zero throughout 18 months of follow-up, said Dr. Peter Pronovost of Johns Hopkins University, Baltimore, and his associates (N. Engl. J. Med. 2006;355:2725–32).
The intervention, part of a statewide program to improve patient safety, targeted clinicians' use of five procedures identified by the Centers for Disease Control and Prevention as having the greatest potential to reduce infection and the greatest ease of implementation. The procedures are washing hands appropriately, using full-barrier precautions during the insertion of central venous catheters, cleaning the skin with chlorhexidine, avoiding the femoral site for access if possible, and removing unnecessary catheters.
A hospital-based practitioner was designated as the infection-control specialist. Clinicians were taught infection-control practices, provided with a central-line cart with necessary supplies, given a checklist to ensure adherence to infection-control practices, and stopped if they weren't following the checklist. Catheter removal was discussed every day at rounds, and ICU teams received feedback on infection rates at monthly and quarterly meetings.
This intervention was assessed at 67 Michigan hospitals of all types, which included 103 medical, surgical, cardiac, neurologic, and trauma ICUs and 1 pediatric ICU. Within 3 months of implementation, the overall median rate of central catheter-related bloodstream infection dropped from 2.7 infections per 1,000 catheter-days at baseline to 0. The corresponding mean rates were 7.7 and 2.3 infections per 1,000 catheter-days, respectively, Dr. Pronovost and his associates said.
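To make the units concrete, the following is a minimal sketch, not drawn from the study itself, of how an infection rate per 1,000 catheter-days is calculated; the counts in the example are hypothetical.

    # Minimal sketch (hypothetical counts): bloodstream infections per 1,000 catheter-days.
    def infection_rate_per_1000(infections, catheter_days):
        """Return the number of infections per 1,000 catheter-days of central line use."""
        return infections / catheter_days * 1000

    # Example: 9 infections observed over 3,333 catheter-days is roughly 2.7 per 1,000.
    print(round(infection_rate_per_1000(9, 3333), 1))  # prints 2.7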
Medical Error Reporting Systems Called Inadequate
Virtually all pediatricians favor reporting medical errors to hospitals and colleagues, as well as disclosing them to patients' families, but most of these specialists also consider the available systems for doing so to be inadequate, according to a survey by Dr. Jane Garbutt of Washington University, St. Louis, and her associates.
The researchers noted that although medical errors involving hospitalized children are frequent, most of the current data on reporting and disclosing medical errors have come from physicians who treat only adults. Because of this, the authors conducted what they described as the first study “to examine communication about medical errors in a large sample of pediatricians.”
A total of 557 academic and community pediatricians and pediatric residents completed an anonymous, 15-minute survey on the issue by mail or over the Internet in 2003–2004. The respondents were affiliated with medical centers in St. Louis or Seattle that had electronic and other incident-reporting systems in place, designed to analyze errors so as to prevent their recurrence.
Despite the emphasis on open communication about medical errors to improve patient safety, such errors are underreported and “such transparency appears to be far from the norm,” the investigators said.
They defined medical errors as “the failure of a planned action to be completed as intended, or the use of a wrong plan to achieve an aim.”
Roughly equal numbers of male and female pediatricians responded to the survey. The mean respondent age was 43 years, and the mean number of years in practice was 12.
Only 7% of the respondents said they had never been involved in a medical error. Approximately 40% said they had been involved in a serious medical error, defined as one that caused permanent injury or transient but potentially life-threatening harm.
A total of 72% said they had been involved in a minor medical error, defined as one that caused harm that was neither permanent nor potentially life threatening. And 61% said they had been involved in a “near miss,” defined as an error that could have caused harm but did not, either by chance or by a timely intervention.
Nearly all the physicians surveyed (97%) endorsed open discussion about medical errors with the hospital, their colleagues, and patients' families, but only 39% felt that available systems were adequate to the task. Respondents said they had problems accessing formal reporting systems and that existing systems were too time consuming, did not ensure confidentiality, were punitive, and did not use the information to improve patient safety.
A full 40% of the respondents said they did not even know whether their hospitals had an error-reporting system that physicians could use to improve patient safety. Most respondents (74%) said they used informal mechanisms to report errors, such as telling a supervisor or manager, Dr. Garbutt and her associates wrote (Arch. Pediatr. Adolesc. Med. 2007;161:179–85).
Two encouraging findings concerned malpractice. Most respondents felt that open discussions of medical errors would make malpractice suits less, rather than more, likely. And respondents' predictions of whether they would be sued for malpractice in the near future had no bearing on their willingness to report and disclose their own errors.
Both of these findings refute the notion that pediatricians avoid reporting or disclosing medical errors because they are afraid of being sued for malpractice, the researchers said. The survey results indicate that redesigning error-reporting systems would encourage pediatricians to report and would thereby improve the safety of hospitalized children. In particular, “the medical profession should develop disclosure guidelines to help physicians with this difficult task,” they added.
Use of Antibiotics Drives Resistance, Study Shows
The use of azithromycin and clarithromycin clearly raises the proportion of macrolide-resistant organisms in the oral flora for a period of at least 6 months, reported Surbhi Malhotra-Kumar, Ph.D., of the University of Antwerp, Belgium, and associates.
This finding establishes that “macrolide use is the single most important driver of the emergence of macrolide resistance in human beings,” the researchers said.
Many past studies have demonstrated a clear relation between antibiotic use and resistance, but to date none has shown a definite causal effect.
Nor has any study linked antibiotic exposure in an individual to later resistance in that same individual.
Dr. Malhotra-Kumar and associates used subjects' oral commensal streptococcal flora, which harbors the same macrolide resistance genes as pathogenic strep organisms do, as a model to study the effects of azithromycin and clarithromycin exposure on antibiotic resistance.
In their double-blind trial, 224 healthy volunteers were randomly assigned to receive once-daily azithromycin for 3 days, twice-daily clarithromycin for 7 days, or placebo.
Samples of oral strep flora were then taken from the subjects' tonsils and posterior pharyngeal wall before treatment and on several occasions afterward for up to 180 days.
Immediately after treatment, the mean proportion of macrolide-resistant streptococci dramatically increased in both active treatment groups. This effect was not seen in the placebo group.
Resistance peaked at the fourth day for azithromycin and the eighth day for clarithromycin, the investigators said (Lancet 2007;369:482–90).
The proportion of resistant streptococci remained increased for both drugs through the final follow-up at 6 months, “which emphasises that the commensal flora could serve as a reservoir of resistance for potentially pathogenic bacteria,” they noted.
Although the study was able to reach definitive results after 6 months, a longer follow-up period “would have enabled us to define the time needed for the resistant oral flora to revert to baseline levels,” Dr. Malhotra-Kumar and associates said.
“In view of the consequences of antibiotic use seen here, physicians should take into account the striking ecological side-effects of antibiotics when prescribing such drugs to their patients,” the researchers added.
Mentally Ill Face Increased Cardiovascular Risk
People who have severe mental illness are at double to triple the risk of dying from coronary heart disease or stroke at all ages, compared with people who are not mentally ill, reported David P.J. Osborn, Ph.D., and his associates.
The social deprivation of the severely mentally ill and their higher rate of smoking do not explain this increased cardiovascular risk, and their use of antipsychotic medications “is only part of the explanation.” The exact mechanism underlying this increased vulnerability remains unknown, the researchers said.
Noting that the true burden of physical disease among the severely mentally ill has never been established, Dr. Osborn and his associates at the Royal Free and University College London tried to estimate the risks of heart disease, stroke, and cancer death using data from the United Kingdom's General Practice Research Database. “Precise estimation of the true population risk for CVD [cardiovascular disease] or cancer mortality requires data from large, representative populations followed up for periods long enough to include sufficient observed deaths,” they pointed out.
The GPRD covered some 8 million patients treated in 741 general practices throughout the United Kingdom between 1987 and 2002, and the sample included almost all those with severe mental illness at the time.
Compared with more than 300,000 randomly selected, matched control subjects who were free from severe mental illness, the 46,136 subjects with schizophrenia, schizoaffective disorder, bipolar disorder, delusional disorder, or other nonorganic psychoses showed triple the rate of death from coronary heart disease before age 50 years and double the rate at ages 50–75 years.
Similarly, stroke mortality was 2.5 times as high in mentally ill people younger than 50 years and twice as high in those aged 50–75 years as it was in the controls, the investigators said (Arch. Gen. Psychiatry 2007;64:242–9).
In contrast, mortality from six of the seven most common cancers in the United Kingdom (colorectal, breast, prostate, stomach, esophageal, and pancreatic cancers) was no different between the control subjects and the mentally ill. Mortality from the seventh common malignancy, respiratory cancer, was initially higher in the severely mentally ill. However, after the data were adjusted to account for smoking and social deprivation, that difference was no longer significant.
Mentally ill people who did not take antipsychotic medications were at increased risk of coronary heart disease and stroke, and those who did take the medications were at even higher risk. People who took the highest doses were at the highest risk of cardiovascular death.
This dose-response relationship could be attributable to adverse drug effects at higher doses, or it could be that higher doses are simply a marker of the severity of mental illness, which itself may raise mortality risk, Dr. Osborn and his associates said.
The reasons why severe mental illness puts people at higher risk of CVD mortality remain unclear. Mentally ill patients may be less likely to present with CVD symptoms, to be correctly diagnosed, to receive correct treatment, and to adhere to treatment, the researchers said.
These findings underscore the fact that people with severe mental illness must be monitored for somatic conditions. Although the management of blood pressure, glucose levels, cholesterol levels, smoking, diet, and exercise may be best accomplished in the primary care setting, “psychiatric health care professionals cannot be viewed as exempt from responsibility for physical health monitoring,” Dr. Osborn and his associates noted.
Melanoma Screens Deemed Cost Effective
One-time melanoma screening of the general population aged 50 years and older was found to be very cost effective (comparable with screening for breast, cervical, and colorectal cancer) in a computer simulation model.
Similarly, screening the siblings of melanoma patients, who are considered to be at risk, every other year also was found to be cost effective, reported Elena Losina, Ph.D., of Boston University School of Public Health, and her associates.
"Melanoma is the only cancer for which [incidence and mortality] are rising unabated, while screening, the potential means for reducing the burden of disease, continues to be underused," the researchers said (Arch. Dermatol. 2007;143:218).
Several national committees have debated the usefulness of population-based melanoma screening, but have never included it in recommended guidelines because there is no conclusive evidence that skin examination by clinicians reduces skin cancer morbidity or mortality. This, in turn, may stem from the fact that no randomized clinical trials of the issue have been conducted because of prohibitive costs and logistic complexity, Dr. Losina and her associates said.
"Cost-effectiveness analysis is particularly useful when randomized controlled trials cannot be done because of ethical or logistic considerations. In the case of melanoma, the low overall disease prevalence and incidence would require more than 360,000 study participants [followed] for 10 years to identify statistically significant differences in the outcome of screening," they said.
The investigators developed a computer simulation model to assess the cost-effectiveness of four melanoma screening strategies. The first was background screening only (skin examination at a routine primary physician visit, followed by referral to a dermatologist if necessary). The second was one-time screening by a dermatologist. The third and fourth were annual and biennial (every-other-year) screening by a dermatologist.
All strategies commenced at age 50 years.
These strategies were applied to three patient populations: a general population; siblings of melanoma patients; and siblings with at least two first-degree relatives with melanoma, considered to be at high risk.
The simulation relied on unproven assumptions about melanoma progression; rates of recurrence and mortality; and costs of treatment for local, regional metastatic, and diffuse metastatic disease, the investigators noted.
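To make the structure of such a model concrete, the sketch below runs a greatly simplified cohort simulation of a hypothetical one-time dermatologist screen at age 50 versus background screening only, accumulating discounted costs and quality-adjusted life-years (QALYs). It is a minimal illustration under assumed placeholder values, not the authors' model; none of the probabilities, costs, or utilities come from the study.

    # A minimal, illustrative cohort simulation (not the authors' model). Every
    # probability, cost, and utility below is an assumed placeholder value.
    def run_cohort(one_time_screen, years=30, cohort=100_000, discount=0.03):
        p_melanoma = 0.0005          # assumed annual chance of developing melanoma
        p_early_screened = 0.80      # assumed chance of localized detection in the screening year
        p_early_background = 0.55    # assumed chance of localized detection otherwise
        cost_screen = 50.0           # assumed cost of the one-time dermatologist visit
        cost_early, cost_late = 5_000.0, 60_000.0   # assumed lifetime treatment costs
        qaly_early, qaly_late = 18.0, 8.0           # assumed remaining QALYs after diagnosis
        total_cost = total_qaly = 0.0
        healthy = float(cohort)
        for year in range(years):
            d = 1.0 / (1.0 + discount) ** year      # discount factor for this cycle
            if one_time_screen and year == 0:
                total_cost += d * cost_screen * healthy
            new_cases = healthy * p_melanoma
            healthy -= new_cases
            p_early = p_early_screened if (one_time_screen and year == 0) else p_early_background
            early, late = new_cases * p_early, new_cases * (1.0 - p_early)
            total_cost += d * (early * cost_early + late * cost_late)
            total_qaly += d * (early * qaly_early + late * qaly_late)   # diagnosed cases
            total_qaly += d * healthy                # one QALY per healthy person-year
        return total_cost / cohort, total_qaly / cohort   # per-person averages

    cost_s, qaly_s = run_cohort(one_time_screen=True)
    cost_b, qaly_b = run_cohort(one_time_screen=False)
    icer = (cost_s - cost_b) / (qaly_s - qaly_b)
    print(f"${icer:,.0f} per QALY gained")           # about $34,500 per QALY with these placeholders

A real model of this kind would track melanoma stage progression, recurrence, and mortality explicitly, which is where the assumptions noted above enter.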
One-time screening of the general population by a dermatologist had a cost-effectiveness ratio of $10,100 per quality-adjusted life year (QALY) gained, Dr. Losina and her associates said.
Meanwhile, screening of at-risk and high-risk siblings of melanoma patients every other year had a cost-effectiveness ratio of $35,500 per QALY gained.
"Interventions in the United States are generally considered cost effective at less than $50,000 per QALY gained," the researchers noted.
In comparison, the cost-effectiveness ratio is $30,500 per QALY for mammography every other year, $24,100 per QALY for annual Pap tests, and $47,400 per QALY for colorectal cancer screening every 5 years, the researchers said.
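For readers unfamiliar with the metric, these figures are incremental cost-effectiveness ratios: the extra cost of a strategy divided by the extra QALYs it yields relative to the comparator. The short calculation below uses invented per-person numbers solely to show the arithmetic and the $50,000-per-QALY benchmark; it does not reproduce the study's inputs.

    # Illustrative arithmetic only; these per-person figures are invented, not the study's.
    cost_screen, cost_background = 1_430.00, 1_380.00   # mean discounted lifetime cost per person
    qaly_screen, qaly_background = 17.2050, 17.2000     # mean discounted QALYs per person

    icer = (cost_screen - cost_background) / (qaly_screen - qaly_background)
    print(f"ICER = ${icer:,.0f} per QALY gained")       # $10,000 per QALY with these numbers
    print("Below the $50,000/QALY benchmark:", icer < 50_000)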