Jeff Evans has been editor of Rheumatology News/MDedge Rheumatology and the EULAR Congress News since 2013. He started at Frontline Medical Communications in 2001 and was a reporter for 8 years before serving as editor of Clinical Neurology News and World Neurology, and briefly as editor of GI & Hepatology News. He graduated cum laude from Cornell University (New York) with a BA in biological sciences, concentrating in neurobiology and behavior.
Calcitonin Nasal Spray May Preserve Bone Architecture
BETHESDA, MD. — Calcitonin nasal spray appears to preserve trabecular bone microarchitecture at the distal radius without substantially altering bone mineral density, Charles H. Chestnut III, M.D., said at a meeting on bone quality.
In a 2-year, randomized, double-blind trial involving 91 women with an average age of 67 years, high-resolution MRI analysis of the distal radius showed that calcitonin nasal spray preserved significantly more trabecular bone architecture than placebo.
Calcitonin's effects included preservation of the volume, number, spacing, and thickness of trabecular bone, Dr. Chestnut wrote in a poster presentation at the meeting, which was sponsored by the National Institute of Arthritis and Musculoskeletal and Skin Diseases and the American Society for Bone and Mineral Research.
Trabecular bone microarchitecture was significantly preserved—if not reinforced—in calcitonin patients, compared with placebo patients, despite loss in bone mineral density (BMD) at the distal radius or lumbar spine during the same period. In placebo patients, the number of trabeculae declined slightly at those sites even if the women had gained BMD.
The results are consistent with earlier reports showing that calcitonin spray was associated with reductions in osteoporotic fractures in postmenopausal women with a history of vertebral fracture, despite producing minimal increases in BMD, said Dr. Chestnut, professor of medicine and radiology at the University of Washington, Seattle.
Almost none of the measurements of BMD in the lumbar spine or midradius were significantly correlated with measures of trabecular microarchitecture change as shown on high-resolution MRI, suggesting that “BMD is a poor marker for trabecular microarchitecture,” Dr. Chestnut wrote.
In the calcitonin group, trabecular microarchitecture in the lower trochanter was preserved, according to T2-MRI findings, regardless of whether patients lost or gained total hip BMD. By comparison, trabecular microarchitecture deteriorated in the placebo group.
All women in the trial received calcium supplementation.
Dr. Chestnut reported that he has received research grants and consulting fees from Novartis Pharmaceuticals Corp., which funded the trial and manufactures calcitonin-salmon nasal spray (Miacalcin).
Search for Genes Controlling Bone Quality Narrows With New Findings
BETHESDA, MD. — New chromosomal regions that possibly contain genes controlling bone quality were recently identified in the first genome-wide linkage scan of cross-sectional bone geometry in humans.
The few reported genetic studies of cross-sectional geometry have shown that the heritability is greater than 50%, “which means that in the general population, more than 50% of the phenotypic variation can be attributable to genetic events,” Hui Shen said at a meeting on bone quality.
In a prospective study of 79 white pedigrees composed of 1,816 individuals, Mr. Shen of Creighton University, Omaha, Neb., and his colleagues calculated logarithm of odds (LOD) scores for bone geometry parameters at the femoral neck, including cross-sectional area, cortical thickness, endocortical diameter, sectional modulus, and buckling ratio in relation to 451 microsatellite markers.
On chromosome 10q26, the researchers calculated an LOD score of 3.29, the highest recorded in the study, for the buckling ratio at the femoral neck. This indicates that the odds are nearly 2,000 to 1 in favor of genetic linkage between the marker locus and a gene influencing buckling ratio.
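The odds figure follows directly from the definition of an LOD score as the base-10 logarithm of the likelihood ratio favoring linkage, so the odds are simply 10 raised to the score. A minimal check of the figures above:

```python
# An LOD (logarithm of odds) score is log10 of the likelihood ratio
# favoring genetic linkage, so the corresponding odds are 10 ** LOD.
def lod_to_odds(lod: float) -> float:
    return 10 ** lod

print(round(lod_to_odds(3.29)))  # ~1950, i.e. nearly 2,000 to 1
```

The same arithmetic puts the 20p12-q12 scores of 1.95 to 2.29 at odds of roughly 90 to 1 up to about 195 to 1, which is why they are treated as suggestive rather than definitive linkage.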
Three bone geometry parameters (cross-sectional area, cortical thickness, and buckling ratio) were linked to a broad region on chromosome 20p12-q12 with LOD scores ranging from 1.95 to 2.29. A candidate gene called bone morphogenetic protein 2 (BMP2) is located in that region.
BMP2 is known to regulate bone growth and in a recent study was identified as a genetic determinant of risk for osteoporosis (PLoS Biol. 2003;1:E69).
The researchers also observed some difference in the linkages for buckling ratio and cortical thickness between males and females. “Taken together, this evidence suggests a gene or a group of genes appearing in this area may have significant effects on [bone mineral density], bone geometry, and probably other fracture-related factors,” said Mr. Shen, a doctoral student at Creighton's Osteoporosis Research Center.
The meeting was sponsored by the National Institute of Arthritis and Musculoskeletal and Skin Diseases and the American Society for Bone and Mineral Research.
Testosterone Tx Fortifies Bone in Hypogonadal Men
BETHESDA, MD. — Testosterone replacement therapy in hypogonadal men appears to significantly improve trabecular bone architecture, according to the results of a small study.
Findings from previous studies suggest that testosterone replacement therapy increases bone mineral density in hypogonadal men, but none of these investigations looked at the effect of the hormone on trabecular architecture.
Maria Benito, M.D., reported at a meeting on bone quality the improvements seen in trabecular architecture in 10 hypogonadal men (median age 51 years) after 2 years of testosterone gel (AndroGel) therapy.
Each patient applied 5 g of a transdermally absorbed gel once per day and then received doses titrated to keep their serum testosterone level within the normal range of 400–900 ng/dL. The men increased their serum testosterone level from a mean of 88 ng/dL at baseline to 468 ng/dL after 2 years, said Dr. Benito of the division of endocrinology, diabetes, and metabolism at the University of Pennsylvania, Philadelphia.
Using micro MRI scans of the distal tibia taken at baseline, 6, 12, and 24 months, Dr. Benito and her colleagues matched architectural parameters in the images from each subject at each time point to ensure that the same volume was analyzed each time. They measured the ratio of surface voxels (representing trabecular plates) to curve voxels (representing trabecular rods) and the ratio of topologic parameters expected to increase during trabecular deterioration to those expected to decrease (the topologic erosion index).
After 24 months of treatment, the ratio of surface to curve voxels increased significantly by 11% while the topologic erosion index decreased significantly by 8%; both measures indicate that trabecular architecture improved. Bone mineral density also rose significantly in the L1-L4 vertebrae by 7%.
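The reported changes are ordinary relative (percent) changes between baseline and follow-up measurements. As a back-of-the-envelope illustration (the baseline values below are hypothetical placeholders, not study data):

```python
# Relative (percent) change between a baseline and a follow-up measurement.
# The baseline of 100.0 is a hypothetical placeholder, not a study value.
def percent_change(baseline: float, followup: float) -> float:
    return (followup - baseline) / baseline * 100.0

print(percent_change(100.0, 111.0))  # 11.0  (surface/curve ratio up 11%)
print(percent_change(100.0, 92.0))   # -8.0  (erosion index down 8%)
```

A rising surface-to-curve ratio and a falling erosion index both point in the same direction: more plate-like, better-connected trabecular bone.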
The improvement in trabecular architecture could not be attributed to body mass index or calcium intake during treatment since neither factor changed substantially. Testosterone's effect on trabecular architecture suggests that it may exert an anabolic effect on bone, she said.
Men were excluded from the study if they had a calcium intake of less than 750 mg per day, a history of disease that could affect bone, or were taking medications that could affect bone, Dr. Benito said at the meeting, sponsored by the National Institute of Arthritis and Musculoskeletal and Skin Diseases and the American Society for Bone and Mineral Research.
Solvay Pharmaceuticals provided the AndroGel used in the study. Dr. Benito had no financial conflicts of interest to report.
Better Bone Biomarkers on the Distant Horizon
BETHESDA, MD. — Fracture will continue to be the primary end point in clinical trials of treatments for osteoporosis until new biomarkers can stand in as surrogates for fracture, according to speakers at a meeting on bone quality.
The large number of factors that need to be tested to validate a single biomarker as a surrogate end point makes it unlikely that any single biomarker will adequately predict the risk of fracture, said Henry Bone, M.D., of the Michigan Bone and Mineral Clinic, Detroit. Researchers may need to combine a set of biomarkers into a model to produce the best surrogate.
Many new potential biomarkers of bone quality are being evaluated in small subgroups in clinical trials, but no single study has compared a set of bone quality measurements with bone mineral density (BMD) and radiographs for prediction of fractures and the effect of treatment, Dr. Bone said. “We really haven't considered using combined models integrating a number of different kinds of these intermediate end points or biomarkers.”
“For something as complicated as fracture risk, I think this is where we're going to end up—that we use combinations of anatomical and more dynamic measurements in order to explain the effects of treatment,” added Steven R. Cummings, M.D., of the California Pacific Medical Center Research Institute in San Francisco.
Surrogate end points, which may be faster and easier to measure than clinical outcomes such as fracture, could allow researchers to speed up clinical trials, enroll fewer patients into studies, and lower the cost of drug development, said Theresa Kehoe, M.D., of the division of metabolic and endocrine drug products at the Food and Drug Administration's Center for Drug Evaluation and Research.
A surrogate end point in a clinical trial is a laboratory or radiologic measurement or physical sign—a biomarker—used as a substitute for a clinically meaningful end point that measures directly how a patient feels, functions, or survives. The changes induced by a therapy on this surrogate end point are expected to reflect changes in a clinically meaningful end point, such as fracture, Dr. Kehoe said.
In osteoporosis clinical trials involving the prevention of postmenopausal osteoporosis with estrogens, current regulatory practice permits the use of BMD data alone to act as a surrogate end point, Dr. Kehoe said at the meeting, which was sponsored by the National Institute of Arthritis and Musculoskeletal and Skin Diseases and the American Society for Bone and Mineral Research.
But data on the rate of fracture are necessary for clinical trials of the prevention or treatment of postmenopausal osteoporosis with selective estrogen receptor modulators or nonestrogen products. In those trials, Dr. Kehoe said the FDA “tends not to accept” BMD data as a surrogate for prevention of postmenopausal osteoporosis before it has data on the rate of fracture.
BMD is still the primary end point for efficacy in noninferiority trials that compare a new formulation of a drug with a once-daily formulation already approved on the basis of its ability to reduce the rate of fracture, she said.
Dr. Kehoe raised additional questions to consider about surrogate end points:
▸ Should the surrogate show consistent sensitivity and specificity in more than one therapeutic class of drugs? A single negative therapeutic example has the potential to undermine the biological plausibility of the proposed surrogate, she noted.
▸ What type of fracture should the surrogate be tested against? This could be an asymptomatic morphometric vertebral fracture, which some already consider to be a surrogate for a symptomatic fracture.
▸ Should the surrogate have equal sensitivity and specificity for mild, moderate, and severe vertebral fractures?
▸ What sensitivity, specificity, positive and negative predictive values, or other relevant statistics should be required to prove that a surrogate is valid?
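The statistics in the last question are all derived from a 2x2 table of surrogate prediction versus observed fracture. A minimal sketch, using hypothetical counts that are not from any trial:

```python
# Sensitivity, specificity, and predictive values from a 2x2 table of
# surrogate-positive/negative vs. fracture/no-fracture outcomes.
# The counts passed in below are hypothetical, for illustration only.
def diagnostic_stats(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),  # fraction of fractures the surrogate flags
        "specificity": tn / (tn + fp),  # fraction of non-fractures it clears
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

stats = diagnostic_stats(tp=80, fp=30, fn=20, tn=170)
print(stats["sensitivity"])  # 0.8
```

Note that the predictive values, unlike sensitivity and specificity, depend on how common fracture is in the study population, which is one reason a surrogate validated in one trial population may not transfer to another.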
BETHESDA, MD. — Fracture will continue to be the primary end point in clinical trials of treatments for osteoporosis until new biomarkers can stand in as surrogates for fracture, according to speakers at a meeting on bone quality.
The large number of factors that need to be tested to validate a single biomarker as a surrogate end point make it unlikely that any single biomarker will adequately predict the risk of fracture, said Henry Bone, M.D., of the Michigan Bone and Mineral Clinic, Detroit. Researchers may need to combine a set of biomarkers into a model to produce the best surrogate.
Many new potential biomarkers of bone quality are being evaluated in small subgroups in clinical trials, but no single study has compared a set of bone quality measurements with bone mineral density (BMD) and radiographs for prediction of fractures and the effect of treatment, Dr. Bone said. “We really haven't considered using combined models integrating a number of different kinds of these intermediate end points or biomarkers.”
“For something as complicated as fracture risk, I think this is where we're going to end up—that we use combinations of anatomical and more dynamic measurements in order to explain the effects of treatment,” added Steven R. Cummings, M.D., of the California Pacific Medical Center Research Institute in San Francisco.
Surrogate end points, which may be faster and easier to measure than clinical outcomes such as fracture, could allow researchers to speed up clinical trials, enroll fewer patients into studies, and lower the cost of drug development, said Theresa Kehoe, M.D., of the division of metabolic and endocrine drug products at the Food and Drug Administration's Center for Drug Evaluation and Research.
A surrogate end point in a clinical trial is a laboratory or radiologic measurement or physical sign—a biomarker—used as a substitute for a clinically meaningful end point that measures directly how a patient feels, functions, or survives. The changes induced by a therapy on this surrogate end point are expected to reflect changes in a clinically meaningful end point, such as fracture, Dr. Kehoe said.
In osteoporosis clinical trials involving the prevention of postmenopausal osteoporosis with estrogens, current regulatory practice permits the use of BMD data alone to act as a surrogate end point, Dr. Kehoe said at the meeting, which was sponsored by the National Institute of Arthritis and Musculoskeletal and Skin Diseases and the American Society for Bone and Mineral Research.
But data on the rate of fracture are necessary for clinical trials of the prevention or treatment of postmenopausal osteoporosis with selective estrogen receptor modulators or nonestrogen products. In those trials, Dr. Kehoe said the FDA “tends not to accept” BMD data as a surrogate for prevention of postmenopausal osteoporosis before it has data on the rate of fracture.
BMD is still the primary end point for efficacy in noninferiority trials that compare a once-daily formulation of a drug that has already been approved based on its ability to reduce the rate of fracture with a new formulation of the same drug, she said.
Dr. Kehoe raised additional questions to consider about surrogate end points:
▸ Should the surrogate show consistent sensitivity and specificity in more than one therapeutic class of drugs? A single negative therapeutic example has the potential to undermine the biological plausibility of the proposed surrogate, she noted.
▸ What type of fracture should the surrogate be tested against? This could be an asymptomatic morphometric vertebral fracture, which some already consider to be a surrogate for a symptomatic fracture.
▸ Should the surrogate have equal sensitivity and specificity for mild, moderate, and severe vertebral fractures?
▸ What sensitivity, specificity, positive and negative predictive values, or other relevant statistics should be required to prove that a surrogate is valid?
BETHESDA, MD. — Fracture will continue to be the primary end point in clinical trials of treatments for osteoporosis until new biomarkers can stand in as surrogates for fracture, according to speakers at a meeting on bone quality.
The large number of factors that need to be tested to validate a single biomarker as a surrogate end point make it unlikely that any single biomarker will adequately predict the risk of fracture, said Henry Bone, M.D., of the Michigan Bone and Mineral Clinic, Detroit. Researchers may need to combine a set of biomarkers into a model to produce the best surrogate.
Many new potential biomarkers of bone quality are being evaluated in small subgroups in clinical trials, but no single study has compared a set of bone quality measurements with bone mineral density (BMD) and radiographs for prediction of fractures and the effect of treatment, Dr. Bone said. “We really haven't considered using combined models integrating a number of different kinds of these intermediate end points or biomarkers.”
“For something as complicated as fracture risk, I think this is where we're going to end up—that we use combinations of anatomical and more dynamic measurements in order to explain the effects of treatment,” added Steven R. Cummings, M.D., of the California Pacific Medical Center Research Institute in San Francisco.
Surrogate end points, which may be faster and easier to measure than clinical outcomes such as fracture, could allow researchers to speed up clinical trials, enroll fewer patients into studies, and lower the cost of drug development, said Theresa Kehoe, M.D., of the division of metabolic and endocrine drug products at the Food and Drug Administration's Center for Drug Evaluation and Research.
A surrogate end point in a clinical trial is a laboratory or radiologic measurement or physical sign—a biomarker—used as a substitute for a clinically meaningful end point that measures directly how a patient feels, functions, or survives. The changes induced by a therapy on this surrogate end point are expected to reflect changes in a clinically meaningful end point, such as fracture, Dr. Kehoe said.
In osteoporosis clinical trials involving the prevention of postmenopausal osteoporosis with estrogens, current regulatory practice permits the use of BMD data alone to act as a surrogate end point, Dr. Kehoe said at the meeting, which was sponsored by the National Institute of Arthritis and Musculoskeletal and Skin Diseases and the American Society for Bone and Mineral Research.
But data on the rate of fracture are necessary for clinical trials of the prevention or treatment of postmenopausal osteoporosis with selective estrogen receptor modulators or nonestrogen products. In those trials, Dr. Kehoe said the FDA “tends not to accept” BMD data as a surrogate for prevention of postmenopausal osteoporosis before it has data on the rate of fracture.
BMD is still the primary end point for efficacy in noninferiority trials that compare a once-daily formulation of a drug that has already been approved based on its ability to reduce the rate of fracture with a new formulation of the same drug, she said.
Dr. Kehoe raised additional questions to consider about surrogate end points:
▸ Should the surrogate show consistent sensitivity and specificity in more than one therapeutic class of drugs? A single negative therapeutic example has the potential to undermine the biological plausibility of the proposed surrogate, she noted.
▸ What type of fracture should the surrogate be tested against? This could be an asymptomatic morphometric vertebral fracture, which some already consider to be a surrogate for a symptomatic fracture.
▸ Should the surrogate have equal sensitivity and specificity for mild, moderate, and severe vertebral fractures?
▸ What sensitivity, specificity, positive and negative predictive values, or other relevant statistics should be required to prove that a surrogate is valid?
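The statistics Dr. Kehoe listed all derive from a simple 2×2 table of surrogate result versus observed fracture. The sketch below shows how they would be computed; the counts are hypothetical, for illustration only.

```python
# Hypothetical 2x2 table: surrogate test result vs. observed fracture.
# These counts are illustrative, not from any trial cited in the article.
true_pos, false_pos = 40, 10   # surrogate positive: fractured / did not
false_neg, true_neg = 5, 45    # surrogate negative: fractured / did not

sensitivity = true_pos / (true_pos + false_neg)  # fraction of fractures flagged
specificity = true_neg / (true_neg + false_pos)  # fraction of non-fractures cleared
ppv = true_pos / (true_pos + false_pos)          # positive predictive value
npv = true_neg / (true_neg + false_neg)          # negative predictive value

print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
print(f"PPV {ppv:.2f}, NPV {npv:.2f}")
```

Which threshold values of these statistics should qualify a surrogate as valid is exactly the open question Dr. Kehoe posed.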
Clinical Capsules
Screening for Lynch Syndrome
Molecular screening combined with immunohistochemical analysis can detect the Lynch syndrome, or hereditary nonpolyposis colorectal cancer, in patients with colorectal adenocarcinoma with greater accuracy than other criteria, reported Heather Hampel of Ohio State University, Columbus, and her colleagues.
In a group of 1,066 patients with colorectal adenocarcinoma, abnormalities of microsatellite instability were detected by immunohistochemical analysis in 123 of 132 tumors that had shown high-frequency microsatellite instability on genotyping. Only 10 of 70 tumors that had low-frequency microsatellite instability on genotyping showed abnormalities on immunohistochemical staining. Overall, 23 patients (2.2% of 1,066)—all with high-frequency microsatellite instability—had a deleterious mutation in a mismatch-repair gene (MLH1, MSH2, MSH6, or PMS2), which indicated that they had the Lynch syndrome (N. Engl. J. Med. 2005;352:1851–60).
Only 3 of those 23 patients fulfilled the Amsterdam criteria for the diagnosis of the syndrome. The Bethesda criteria for the syndrome would have diagnosed 15 of the patients. Five of the 23 patients did not fulfill either set of criteria. A mismatch-repair gene mutation was found in 52 of 117 relatives of the 23 patients; 14 of the 52 relatives had cancer related to the Lynch syndrome.
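The detection rates of the two clinical criteria sets can be checked directly from the counts reported above (23 mutation carriers in total); this is a simple arithmetic restatement, not additional study data.

```python
# Counts quoted in the article: 23 mutation carriers identified by
# molecular screening, of whom 3 met Amsterdam criteria, 15 met
# Bethesda criteria, and 5 met neither.
carriers = 23

print(f"Amsterdam criteria sensitivity: {3 / carriers:.0%}")
print(f"Bethesda criteria sensitivity: {15 / carriers:.0%}")
print(f"Missed by both criteria sets: {5 / carriers:.0%}")
```

The gap between 13% and 65% detection is what makes molecular screening plus immunohistochemistry the more accurate approach.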
Sigmoidoscopy Yield Is Poor in Women
Flexible sigmoidoscopy is inadequate for predicting advanced neoplasia in the proximal colon in women, making colonoscopy the preferred method of screening for colorectal cancer in women, according to the results of a prospective study.
The study included 1,463 asymptomatic, average-risk women who underwent colonoscopy. Flexible sigmoidoscopy would have missed 94% (47 of 50) of advanced neoplasias in the proximal colon if the procedure extended to the junction of the sigmoid and descending colon and a finding of distal colorectal neoplasia had triggered a colonoscopy, reported Philip Schoenfeld, M.D., of the University of Michigan, Ann Arbor, and his associates. In a similar situation, 92% (36 of 39) of advanced neoplasias in the proximal colon would have been missed if sigmoidoscopy had been performed to the splenic flexure (N. Engl. J. Med. 2005;352:2061–8).
When the researchers matched men from the VA Cooperative Study 380 with women in the present study who had a negative fecal occult blood test and no family history of colorectal cancer, flexible sigmoidoscopy had a significantly higher yield for advanced neoplasia in men (66%, 126 of 190) than in women (35%, 19 of 54).
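The miss rates for the two sigmoidoscopy strategies follow directly from the counts reported above; this sketch just reproduces that arithmetic.

```python
# Proximal advanced neoplasias that sigmoidoscopy would have missed,
# by how far the examination extended (counts from the article).
strategies = [
    ("to sigmoid-descending junction", 47, 50),
    ("to splenic flexure", 36, 39),
]

for label, missed, total in strategies:
    print(f"{label}: {missed}/{total} missed = {missed / total:.0%}")
```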
Hepatic Colorectal Metastases
Expression of the catalytic subunit of human telomerase reverse transcriptase independently predicts lower survival in patients who undergo curative resection for hepatic colorectal metastases, reported Julien Dômont, M.D., of the Institut Gustave Roussy, Villejuif, France, and colleagues.
In a retrospective multicenter study of 201 patients, positive staining for human telomerase reverse transcriptase (hTERT) in the nucleolus was associated with a twofold higher relative risk of dying after curative resection for hepatic colorectal metastases; the increase was statistically significant (J. Clin. Oncol. 2005;23:3086–93).
The median overall survival of patients with positive hTERT staining was significantly lower than survival of patients with negative staining after hepatic resection (23 months vs. 46 months). In a multivariate analysis, other independent risk factors for lower survival included more than two hepatic metastases and a disease-free interval of less than 12 months.
C. difficile Diarrhea Relapse
The prebiotic oligofructose prevents relapse of Clostridium difficile-associated diarrhea in significantly more inpatients than does placebo, reported Stephen Lewis, M.D., of Derriford Hospital, Plymouth, England, and his associates.
Of 72 patients who received oligofructose in a double-blind, randomized trial, 6 patients (8%) had a relapse of diarrhea, compared with 24 (34%) of 70 patients who received placebo.
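From those counts, the relative risk of relapse with oligofructose versus placebo can be computed; the article reports only the raw percentages, so the relative risk below is our own arithmetic on the published numbers.

```python
# Relapse counts from the randomized trial: 6 of 72 on oligofructose,
# 24 of 70 on placebo.
relapse_treated = 6 / 72
relapse_placebo = 24 / 70
relative_risk = relapse_treated / relapse_placebo

print(f"relapse: {relapse_treated:.0%} vs {relapse_placebo:.0%}, "
      f"RR = {relative_risk:.2f}")
```

A relative risk of roughly 0.24 corresponds to about a 76% reduction in relapse with the prebiotic.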
However, the two groups did not differ in C. difficile culture positivity at 30 and 60 days after hospital discharge. At hospital discharge and at 60 days after discharge, oligofructose-treated patients carried significantly higher concentrations of fecal bifidobacteria and total anaerobes, but not aerobes, compared with placebo-group patients. “It is possible that the bifidobacterial metabolic products inhibited the metabolic activity of C. difficile,” the investigators suggested (Clin. Gastroenterol. Hepatol. 2005;3:442–8).
Most patients received metronidazole as first-line treatment for their initial or relapse episode of C. difficile-associated diarrhea. Oligofructose is a fructo-oligosaccharide found in plants such as chicory, asparagus, and artichoke.
Suspect a Bile Leak When Blunt Liver Injury Requires Embolization
TUCSON, ARIZ. — Bile leaks most often accompany blunt liver injury in patients with the most severe liver trauma and in those who need angiographic embolization, reported Wendy L. Wahl, M.D., at the annual meeting of the Central Surgical Association.
In a review of 281 adults with blunt liver injury during 1997–2004, Dr. Wahl and her associates at the University of Michigan, Ann Arbor, determined that bile leaks usually stem from high-grade liver injuries in patients initially assigned to receive angiographic embolization. They found that hepatobiliary iminodiacetic acid (HIDA) scanning, or cholescintigraphy, is often the optimal method to diagnose bile leaks after nonoperative management.
The investigators divided the patients into three groups:
▸ An observation group of patients for whom there was no intention to operate or use angiographic embolization at admission.
▸ An operative group of patients who immediately went to the operating room from the emergency department or CT scanner. They included patients who first went to the operating room and then received angiographic embolization.
▸ An arteriography group of patients who received an angiogram, with or without embolization.
Operative and arteriographic patients had significantly higher liver Abbreviated Injury Scale (AIS) scores than did observed patients (3.2 and 4 vs. 2.4, respectively).
The need for arteriography was an independent risk factor for the development of a bile leak, even if a patient was sent to get angiographic embolization but did not actually receive it, said Dr. Wahl, director of the trauma-burn ICU at the university. Patients in the arteriographic group had a significantly higher rate of bile leak (43%) than did patients in the operative (19%) or observation groups (2%).
Liver AIS scores were significantly higher in patients who developed bile leaks (4.2) than in those who did not (2.6). In fact, all bile leaks occurred in the 57 patients who had high-grade liver injuries (grade 4 or higher).
Clinicians detected most of the bile leaks with HIDA scans, but they detected some during laparotomy, laparoscopy, endoscopic retrograde cholangiopancreatography, or percutaneous transhepatic cholangiography. “If the patient had a negative HIDA scan, we did not find that the patients developed a bile leak after their initially negative HIDA scan,” she said.
On average, patients who received treatment for a bile leak by day 4 had a significantly shorter hospital stay than those treated after that (16 vs. 32 days). Each additional day of a delayed bile leak diagnosis after day 4 added 3.3 days to the length of stay.
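The length-of-stay finding can be read as a rough linear relation; the sketch below is our own illustrative restatement of the reported figures (16-day baseline, 3.3 extra days per day of diagnostic delay after day 4), not a model from the study itself.

```python
def estimated_length_of_stay(diagnosis_day):
    """Rough estimate of hospital stay (days) by day of bile leak
    diagnosis, per the figures reported by Dr. Wahl's group."""
    baseline = 16          # mean stay when the leak was treated by day 4
    per_day_penalty = 3.3  # extra days per day of delay beyond day 4
    delay = max(0, diagnosis_day - 4)
    return baseline + per_day_penalty * delay

print(estimated_length_of_stay(4))   # treated by day 4
print(estimated_length_of_stay(9))   # five days of delay
```

A 5-day delay lands near the 32-day mean stay the investigators reported for late-treated patients, which is why their guideline pushes the HIDA scan to day 2 or 3.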
Dr. Wahl's group now follows a guideline of doing a HIDA scan on day 2 or 3 in patients who have had a high-grade liver injury, an angiographic embolization, or just an angiogram for their liver-related injuries.
Severe Blunt Liver Injury Often Best Treated Nonoperatively
TUCSON, ARIZ. — Nonoperative management of severe blunt liver injuries appears to be the best strategy for caring for hemodynamically stable patients, according to findings from a retrospective review of 561 patients.
Choosing between nonoperative and operative treatment schemes seems to make a difference, however, only in patients with the most severe liver injuries, A. Britton Christmas, M.D., reported at the annual meeting of the Central Surgical Association.
Prior to the 1990s, physicians diagnosed liver injuries primarily through peritoneal lavage, CT scanning, or surgical exploration. The care of suspected liver injuries included repair of vascular, parenchymal, or biliary structures and drainage of the perihepatic spaces to control biliary leaks and to avoid sepsis, said Dr. Christmas, a surgical resident at the University of Louisville (Ky.).
Improvements in imaging technologies for diagnosing solid-organ injuries and an increased interest in critical care monitoring have prompted a paradigm shift toward nonoperative management. In hemodynamically stable patients with blunt liver injury, nonoperative management has evolved into the standard of care at most U.S. trauma centers, Dr. Christmas said.
Although the reported success rate for nonoperative management of hepatic trauma ranges from 82% to 100%, the justification for choosing operative or nonoperative management remains ambiguous, he said.
Dr. Christmas and his colleagues reviewed 561 cases of blunt liver injury in the trauma registry at the university during 1993–2003.
Operative management—defined as undergoing an operation within 24 hours after admission—in 183 patients led to higher overall mortality than did nonoperative management in 378 patients (18% vs. 5%); liver-related mortality similarly was higher in those who received operative management (11% vs. 0.4%). Hemodynamic instability occurred in 20% of the operatively managed patients but in none of those managed nonoperatively.
Operative mortality rose with the grade of hepatic injury, such that 7% of patients with grade 1 liver injury and 92% with grade 5 injury died. Patients with severe liver injury (grades 3–5) who were treated operatively had significantly higher mortality than did those treated nonoperatively.
The management strategy for grade 2 or 3 liver injury did not significantly impact mortality.
The percentage of patients who could be managed nonoperatively dropped as the grade of liver injury increased; 82% of patients with grade 1 injury and 32% with grade 5 injury received nonoperative care. One patient died as a result of nonoperative management when he bled after angiographic embolization and required an operation on the first post-injury day. He died on the third day after injury.
Intraabdominal injuries associated with blunt liver injury required an operation in 19% of nonoperatively managed patients. A total of 3% of patients who originally received nonoperative management ultimately required laparotomy after the first 24 hours.
Adjunctive surgical procedures, such as biliary drainage, endoscopic retrograde cholangiopancreatography, and angiographic embolization, were performed with a high degree of success in 42 patients managed nonoperatively, Dr. Christmas said.
TUCSON, ARIZ. — Nonoperative management of severe blunt liver injuries appears to be the best strategy of caring for hemodynamically stable patients, according to findings from a retrospective review of 561 patients.
Choosing between nonoperative and operative treatment schemes seems to make a difference, however, only in patients with the most severe liver injuries, A. Britton Christmas, M.D., reported at the annual meeting of the Central Surgical Association.
Prior to the 1990s, physicians diagnosed liver injuries primarily through peritoneal lavage, CT scanning, or surgical exploration. The care of suspected liver injuries included repair of vascular, parenchymal, or biliary structures and drainage of the perihepatic spaces to control biliary leaks and to avoid sepsis, said Dr. Christmas, a surgical resident at the University of Louisville (Ky.).
Improvements in imaging technologies for diagnosing solid-organ injuries and an increased interest in critical care monitoring have prompted a paradigm shift toward nonoperative management. In hemodynamically stable patients with blunt liver injury, nonoperative management has evolved into the standard of care at most U.S. trauma centers, Dr. Christmas said.
Although the reported success rate for nonoperative management of hepatic trauma ranges from 82% to 100%, justification for the preference of either operative or nonoperative management remains ambiguous, he said.
Dr. Christmas and his colleagues reviewed 561 cases of blunt liver injury in the trauma registry at the university during 1993–2003.
Operative management—defined as undergoing an operation within 24 hours after admission—in 183 patients led to higher overall mortality than did nonoperative management in 378 patients (18% vs. 5%); liver-related mortality similarly was higher in those who received operative management (11% vs. 0.4%). Hemodynamic instability occurred in 20% of the operatively managed patients but in none of those managed nonoperatively.
Operative mortality rose with the grade of hepatic injury, such that 7% of patients with grade 1 liver injury and 92% with grade 5 injury died. Patients with severe liver injury (grades 3–5) who were treated operatively had significantly higher mortality than did those treated nonoperatively.
The management strategy for grade 2 or 3 liver injury did not significantly impact mortality.
The percentage of patients able to be managed nonoperatively dropped as the grade of liver injury increased; 82% of patients with grade 1 injury and 32% with grade 5 injury received nonoperative care. One patient died as a result of nonoperative management when he bled after angiographic embolization and required an operation on the first post-injury day. He died on the third day after injury.
Intraabdominal injuries associated with blunt liver injury required an operation in 19% of nonoperatively managed patients. A total of 3% of patients who originally received nonoperative management ultimately required laparotomy after the first 24 hours.
Adjunctive surgical procedures, such as biliary drainage, endoscopic retrograde cholangiopancreatography, and angiographic embolization, were performed with a high degree of success in 42 patients managed nonoperatively, Dr. Christmas said.
TUCSON, ARIZ. — Nonoperative management of severe blunt liver injuries appears to be the best strategy of caring for hemodynamically stable patients, according to findings from a retrospective review of 561 patients.
Choosing between nonoperative and operative treatment schemes seems to make a difference, however, only in patients with the most severe liver injuries, A. Britton Christmas, M.D., reported at the annual meeting of the Central Surgical Association.
Prior to the 1990s, physicians diagnosed liver injuries primarily through peritoneal lavage, CT scanning, or surgical exploration. The care of suspected liver injuries included repair of vascular, parenchymal, or biliary structures and drainage of the perihepatic spaces to control biliary leaks and to avoid sepsis, said Dr. Christmas, a surgical resident at the University of Louisville (Ky.).
Improvements in imaging technologies for diagnosing solid-organ injuries and an increased interest in critical care monitoring have prompted a paradigm shift toward nonoperative management. In hemodynamically stable patients with blunt liver injury, nonoperative management has evolved into the standard of care at most U.S. trauma centers, Dr. Christmas said.
Although the reported success rate for nonoperative management of hepatic trauma ranges from 82% to 100%, the justification for choosing operative or nonoperative management remains ambiguous, he said.
Dr. Christmas and his colleagues reviewed 561 cases of blunt liver injury in the trauma registry at the university during 1993–2003.
Operative management—defined as undergoing an operation within 24 hours after admission—in 183 patients led to higher overall mortality than did nonoperative management in 378 patients (18% vs. 5%); liver-related mortality similarly was higher in those who received operative management (11% vs. 0.4%). Hemodynamic instability occurred in 20% of the operatively managed patients but in none of those managed nonoperatively.
Operative mortality rose with the grade of hepatic injury, such that 7% of patients with grade 1 liver injury and 92% with grade 5 injury died. Patients with severe liver injury (grades 3–5) who were treated operatively had significantly higher mortality than did those treated nonoperatively.
The management strategy for grade 2 or 3 liver injury did not significantly impact mortality.
The percentage of patients able to be managed nonoperatively dropped as the grade of liver injury increased; 82% of patients with grade 1 injury and 32% with grade 5 injury received nonoperative care. One nonoperatively managed patient bled after angiographic embolization, required an operation on the first post-injury day, and died on the third day after injury.
Intraabdominal injuries associated with blunt liver injury required an operation in 19% of nonoperatively managed patients. A total of 3% of patients who originally received nonoperative management ultimately required laparotomy after the first 24 hours.
Adjunctive surgical procedures, such as biliary drainage, endoscopic retrograde cholangiopancreatography, and angiographic embolization, were performed with a high degree of success in 42 patients managed nonoperatively, Dr. Christmas said.
Transdermal Technique Checks Bone Quality
BETHESDA, MD. — Among the many novel technologies cropping up to help analyze bone quality noninvasively, near-infrared spectroscopy may eventually prove to be quite useful, according to results from a preliminary study.
In a study of mice, near-infrared spectroscopy detected differences in mineralization between animals with and without a mutation that models type III osteogenesis imperfecta, Guiyang Li, Ph.D., reported at a meeting on bone quality.
Dual x-ray absorptiometry scans are limited in that they cannot obtain “information on molecular structure of bone and its primary components—hydroxyapatite mineral and collagen,” explained Dr. Li, of the musculoskeletal imaging and spectroscopy laboratory at the Hospital for Special Surgery in New York.
Near-infrared spectroscopy can penetrate millimeters to centimeters through the skin—farther than its close cousin, mid-infrared spectroscopy, which can only penetrate about 10 μm into skin, Dr. Li noted.
Mid-infrared spectroscopy has stronger absorbance bands than near-infrared spectroscopy.
The relatively low intensity of near-infrared absorbance necessitates special modeling methods to analyze the resulting spectrum, he explained at the meeting, which was sponsored by the National Institute of Arthritis and Musculoskeletal and Skin Diseases and cosponsored by the American Society for Bone and Mineral Research.
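The poster did not specify which modeling methods were used. One common chemometric approach to weak, overlapping near-infrared bands is multivariate calibration; the sketch below illustrates principal component regression on synthetic spectra. The single-band spectral model and all numbers are assumptions for illustration only.

```python
import numpy as np

# Illustrative sketch: synthetic near-infrared spectra whose amplitude
# scales with an assumed "mineral fraction" property, plus noise.
rng = np.random.default_rng(0)
n_samples, n_channels = 60, 200
mineral = rng.uniform(0.4, 0.9, n_samples)                     # hypothetical property
band = np.exp(-((np.arange(n_channels) - 120) / 25.0) ** 2)    # one broad band
spectra = np.outer(mineral, band) + 0.01 * rng.standard_normal((n_samples, n_channels))

# Principal component regression (PCR): project centered spectra onto the
# leading principal component, then fit a linear model from scores to the
# property of interest.
X = spectra - spectra.mean(axis=0)
_, _, vt = np.linalg.svd(X, full_matrices=False)
scores = X @ vt[0]
slope, intercept = np.polyfit(scores, mineral, 1)

# Predict the property for the first five spectra from their PC scores.
pred = slope * ((spectra[:5] - spectra.mean(axis=0)) @ vt[0]) + intercept
```

On clean synthetic data like this, the predictions track the underlying mineral fraction closely; real spectra would need preprocessing (baseline correction, scatter correction) before such a calibration is trustworthy.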
High-Resolution CT Accurately Assesses Bone Microarchitecture
BETHESDA, MD. — High-resolution peripheral quantitative CT appears to be a promising technology for identifying osteoporosis-related changes in bone microarchitecture, according to the results of a prospective study.
Data from the noninvasive technique suggest that the imaging procedure will provide new insight into the degradation of bone mineral architecture that occurs in osteoporosis, Stéphanie Boutroy, Ph.D., said at a meeting on bone quality.
Dr. Boutroy of France's National Institute of Health and Medical Research, Lyon, described her findings from an investigation of the scanning technique in 108 healthy premenopausal women (aged 19–45 years), 109 osteopenic, postmenopausal women (aged 52–88 years), and 33 osteoporotic, postmenopausal women (aged 61–84 years). The women were classified as osteopenic or osteoporotic based on bone mineral density (BMD) measures taken by dual x-ray absorptiometry of the femoral neck or spine.
Initially, eight healthy women underwent three separate scanning sessions within 1 month to determine the short-term reproducibility of the density and architecture parameters of the scanning protocol. Between the three sessions, trabecular and cortical volumetric BMD measurements varied by only 0.5%–1.3% in each of those eight patients. Similarly, trabecular architecture values varied by 0.9%–3.1% for each patient between sessions.
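Short-term reproducibility of this kind is commonly summarized as a percent coefficient of variation across the repeat sessions. A minimal sketch, with invented volumetric BMD values for one subject (the study's exact precision statistic is not stated), might look like:

```python
import statistics

def session_cv(values):
    """Percent coefficient of variation (CV) across repeat scan sessions."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical trabecular volumetric BMD readings (mg HA/cm^3) from three
# sessions in a single subject -- values invented for illustration.
vbmd_sessions = [182.0, 183.5, 181.2]
cv_percent = session_cv(vbmd_sessions)
```

A CV in this range (well under 2%) is what makes the technique plausible for tracking architectural change over time.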
When Dr. Boutroy examined the relationship between volumetric BMD and architectural parameters, she found that total density, as expected, was strongly correlated to both trabecular and cortical density. Trabecular and cortical density were strongly correlated to trabecular architecture and cortical thickness, respectively.
At the distal radius, osteoporotic women had significantly lower total volumetric BMD and cortical thickness compared with osteopenic women. Likewise, osteoporotic women also had comparatively lower trabecular density, number, thickness, and separation. No differences could be found in cortical density or the distribution of trabeculae between the two groups, Dr. Boutroy said at the meeting, sponsored by the National Institute of Arthritis and Musculoskeletal and Skin Diseases and the American Society for Bone and Mineral Research.
At the tibia, osteoporotic women had significantly lower measurements on all parameters (total volumetric BMD, cortical and trabecular density, and trabecular number, thickness, and separation) than osteopenic women. In addition, the osteopenic women had significantly lower values on all parameters compared with healthy, premenopausal women.
Dr. Boutroy has no financial interest in the companies that manufacture high-resolution peripheral quantitative CT devices.
Prostate Cancer Screening, Treatment Revisited
ORLANDO, FLA. — Emerging insights into the clinical significance of prostate-specific antigen levels are leading to new approaches to screening, treatment, and patient counseling, speakers said at a symposium on prostate cancer sponsored by the American Society of Clinical Oncology.
PSA testing has led to the detection of indolent, slow-growing prostate cancers in many men. Overdetection of these biologically inconsequential cancers should prompt doctors to question their screening practices and the way they approach managing such patients, the speakers suggested.
The word “cancer” promotes aggressive treatment, which is often disproportionate to the natural history of minimal-volume, low-grade, “good risk” prostate cancer, said Laurence Klotz, M.D., professor of surgery at the University of Toronto.
Interpreting PSA Levels
Accumulating evidence suggests that a normal PSA level—commonly thought of as below 4.0 ng/mL—is losing its clinical relevance for detecting prostate cancer.
The Prostate Cancer Prevention Trial (PCPT), the largest prostate cancer screening study to date, randomized 18,882 patients to the 5-α-reductase inhibitor finasteride or placebo. The trial is the only major study to date to obtain prostate biopsies when clinically indicated or when PSA levels rose above 4.0 ng/mL and at the end of the study, regardless of PSA level or treatment.
Among 2,950 men in the placebo arm of the PCPT who never had a PSA level higher than 4.0 ng/mL or an abnormal digital rectal examination, 15% had a biopsy positive for prostate cancer at the end of the 7-year study. About 27% of those with a PSA level of 3.1–4.0 ng/mL had a positive biopsy, and even 6.6% of men with a PSA level of up to 0.5 ng/mL had a positive biopsy (N. Engl. J. Med. 2004;350:2239–46).
“There is no level of PSA at which you have no risk of prostate cancer,” said Ian Thompson, M.D., lead investigator of the PCPT.
Despite the large percentage of placebo patients who had prostate cancer with PSA levels below 4.0 ng/mL, only 2.3% of them had high-grade cancer with a Gleason score of 7 or higher, and only 0.24% had high-grade cancer with a Gleason score of 8 or 9 (and none had a score of 10), noted Howard L. Parnes, M.D., chief of the prostate and urologic cancers research group in the division of cancer prevention at the National Cancer Institute.
That prostate cancer can occur at all PSA levels indicates “PSA is just a marker, like cholesterol. It is not dichotomous,” said Dr. Thompson, urology department chair at the University of Texas, San Antonio.
Physicians will need to individualize the decision to biopsy in light of these results, rather than just biopsy patients with PSA levels above 4.0 ng/mL, Dr. Parnes advised.
The Case for Watchful Waiting
In men older than 55 years with an initial PSA level of 3.0 ng/mL or less, systematic use of PSA testing and subsequent biopsying of men who had a PSA level of more than 4.0 ng/mL or an abnormal digital rectal examination resulted in a prostate cancer detection rate of about 24% during a 7-year period, according to the results of the PCPT (N. Engl. J. Med. 2003;349:215–24).
But screening has been shown to increase the incidence-to-mortality ratio for prostate cancer from 2.5:1 to 15:1, Dr. Klotz said at a separate session at the symposium, which was cosponsored by the Society of Urologic Oncology and the American Society for Therapeutic Radiology and Oncology.
Prostate cancer progresses slowly in most patients, with long windows of curability. About 85%–90% of prostate cancer patients with a Gleason score of 6 or less have “insignificant” prostate cancer, which is not destined to cause morbidity or mortality during a patient's lifetime. Only 4%–8% of cancers with a Gleason score of 6 or less will progress to high-grade cancer after 8 years, he said. And even then, only some patients will die of the cancer.
Dr. Klotz suggested that doctors should consider “watchful waiting” more often in good-risk prostate cancer patients—those with a Gleason score of 6 or less, a PSA level of 10.0 ng/mL or less, and stage T1c-T2a tumors.
He and his associates followed 299 patients with clinically localized prostate cancer (Gleason score of 7 or less, stage T1b-T2b, and PSA level of 15.0 ng/mL or less). The patients were seen every 3 months for 2 years and then every 6 months thereafter, waiting for either an increase in PSA level, clinical progression, or histologic upgrade on repeat biopsy before implementing appropriate treatment.
The median PSA doubling time in the cohort was 7 years. About 22% had a doubling time of less than 3 years, and 42% had a doubling time of more than 10 years. Only two patients died from prostate cancer, each at 5 years after diagnosis, which “implies that both had incurable disease at diagnosis,” he said. Overall, two-thirds of the patients remain free from progression.
Rapid PSA progression has generally been defined as a doubling time of less than 3 years. Some researchers have found that a PSA velocity of more than 2.0 ng/mL per year corresponds to disease progression (N. Engl. J. Med. 2004;351:125–35).
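Doubling time is conventionally estimated by assuming exponential PSA growth and regressing the natural log of PSA on time, so that doubling time = ln(2)/slope. A minimal sketch, with invented serial measurements:

```python
import numpy as np

def psa_doubling_time(times_yr, psa_vals):
    """Estimate PSA doubling time (years) from serial measurements.

    Assumes exponential growth, PSA(t) = PSA0 * exp(k*t), so the doubling
    time is ln(2) / k, with k the slope of ln(PSA) regressed on time.
    """
    slope, _ = np.polyfit(times_yr, np.log(psa_vals), 1)
    if slope <= 0:
        return float("inf")  # PSA stable or falling: no doubling
    return float(np.log(2) / slope)

# Hypothetical example: PSA measured every 3 months over 1 year,
# rising from 4.0 to 5.2 ng/mL (values invented for illustration).
times = [0.0, 0.25, 0.5, 0.75, 1.0]
psa = [4.0, 4.3, 4.6, 4.9, 5.2]
dt = psa_doubling_time(times, psa)
```

For this invented series the doubling time comes out under 3 years, which under the criteria described here would argue for intervention rather than continued surveillance.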
The intervention criteria on a program of watchful waiting or “active surveillance” should include PSA doubling time or grade progression, Dr. Klotz advised. (See box.)
A 50-year-old man with good-risk prostate cancer could potentially face the psychological burden of living with prostate cancer for around 30 years, Dr. Klotz noted. But even patients who have been treated for prostate cancer still worry. At an office visit, the first thing that comes to the mind of a patient treated for prostate cancer is his PSA level.
To determine the validity of a watchful waiting approach, the START trial (Standard Treatment Against Restricted Treatment) will randomize 1,200–2,000 good-risk prostate cancer patients to active surveillance with selective delayed treatment or definitive therapy (radical prostatectomy, brachytherapy, or external beam radiation therapy).
Selective Use of PSA Testing
Instead of routinely testing PSA levels in all men, physicians could provide information on prostate cancer screening, suggested Timothy J. Wilt, M.D., an internist at the Minneapolis Veterans Affairs Center for Chronic Disease Outcomes Research.
This information could include the difference between prostate cancer and other prostate problems, descriptions of what PSA testing and digital rectal examinations can and cannot tell them, the consequences that may result from having a PSA test, and the risks and benefits of treatment options available for prostate cancer.
Physicians should target testing or treatment to men most likely to benefit from them but should also reassure those who are unlikely to benefit that not testing PSA or undergoing watchful waiting “is compassionate care that is likely to provide superior health outcomes,” Dr. Wilt recommended during another session at the symposium.
Criteria for Intervention
In patients with “good risk” prostate cancer (Gleason score of 6 or less, PSA level of 10.0 ng/mL or less, and clinically localized stage T1c-T2a), physicians may want to use either of the following strategies to determine when to intervene and begin appropriate treatment:
Rapid PSA Doubling Time
▸ Measure PSA level every 3 months for 2 years and then every 6 months thereafter.
▸ If the PSA doubling time is less than 3 years, it may be time to intervene.
Gleason Grade Progression on Repeat Biopsy
▸ Biopsy between 1 and 2 years, then every 3 years, stopping at age 80 years.
▸ Treat if there is progression to a predominant Gleason pattern 4 or worse.
Source: Dr. Klotz
ORLANDO, FLA. — Emerging insights into the clinical significance of prostate-specific antigen levels are leading to new approaches to screening, treatment, and patient counseling, speakers said at a symposium on prostate cancer sponsored by the American Society of Clinical Oncology.
PSA testing has led to the detection of indolent, slow-growing prostate cancers in many men. Overdetection of these biologically inconsequential cancers should prompt doctors to question their screening practices and the way they approach managing such patients, the speakers suggested.
The word “cancer” promotes aggressive treatment, which is often disproportionate to the natural history of minimal-volume, low-grade, “good risk” prostate cancer, said Laurence Klotz, M.D., professor of surgery at the University of Toronto
Interpreting PSA Levels
Accumulating evidence suggests that a normal PSA level—commonly thought of as below 4.0 ng/mL—is losing its clinical relevance for detecting prostate cancer.
The Prostate Cancer Prevention Trial (PCPT), the largest prostate cancer screening study to date, randomized 18,882 patients to the 5-α-reductase inhibitor finasteride or placebo. The trial is the only major study to date to obtain prostate biopsies when clinically indicated or when PSA levels rose above 4.0 ng/mL and at the end of the study, regardless of PSA level or treatment.
Among 2,950 men in the placebo arm of the PCPT who never had a PSA level higher than 4.0 ng/mL or an abnormal digital rectal examination, 15% had a biopsy positive for prostate cancer at the end of the 7-year study. About 27% of those with a PSA level of 3.1–4.0 ng/mL had a positive biopsy, and even 6.6% of men with a PSA level of up to 0.5 ng/mL had a positive biopsy (N. Engl. J. Med. 2004;350:2239–46).
“There is no level of PSA at which you have no risk of prostate cancer,” said Ian Thompson, M.D., lead investigator of the PCPT.
Despite the large percentage of placebo patients who had prostate cancer with PSA levels below 4.0 ng/mL, only 2.3% of them had high-grade cancer with a Gleason score of 7 or higher, and only 0.24% had high-grade cancer with a Gleason score of 8 or 9 (and none had a score of 10), noted Howard L. Parnes, M.D., chief of the prostate and urologic cancers research group in the division of cancer prevention at the National Cancer Institute.
That prostate cancer can occur at all PSA levels indicates “PSA is just a marker, like cholesterol. It is not dichotomous,” said Dr. Thompson, urology department chair at the University of Texas, San Antonio.
Physicians will need to individualize the decision to biopsy in light of these results, rather than just biopsy patients with PSA levels above 4.0 ng/mL, Dr. Parnes advised.
The Case for Watchful Waiting
In men older than 55 years with an initial PSA level of 3.0 ng/mL or less, systematic use of PSA testing and subsequent biopsying of men who had a PSA level of more than 4.0 ng/mL or an abnormal digital rectal examination resulted in a prostate cancer detection rate of about 24% during a 7-year period, according to the results of the PCPT (N. Engl. J. Med. 2003;349:215–24).
But screening has been shown to increase the incidence-to-mortality ratio for prostate cancer from 2.5:1 to 15:1, Dr. Klotz said at a separate session at the symposium, which was cosponsored by the Society of Urologic Oncology and the American Society for Therapeutic Radiology and Oncology.
Prostate cancer progresses slowly in most patients, with long windows of curability. About 85%–90% of prostate cancer patients with a Gleason score of 6 or less have “insignificant” prostate cancer, which is not destined to cause morbidity or mortality during a patient's lifetime. Only 4%-8% of cancers with a Gleason score of 6 or less will progress to high-grade cancer after 8 years, he said. And even then, only some patients will die of the cancer.
Dr. Klotz suggested that doctors should consider “watchful waiting” more often in good-risk prostate cancer patients—those with a Gleason score of 6 or less, PSA level of 10.0 or less, and T1c-T2a tumor grade.
He and his associates followed 299 patients with clinically localized prostate cancer (Gleason score of 7 or less, stage T1b-T2b, and PSA level of 15.0 ng/mL or less). The patients were seen every 3 months for 2 years and then every 6 months thereafter, waiting for either an increase in PSA level, clinical progression, or histologic upgrade on repeat biopsy before implementing appropriate treatment.
The median PSA doubling time in the cohort was 7 years. About 22% had a doubling time of less than 3 years, and 42% had a doubling time of more than 10 years. Only two patients died from prostate cancer, each at 5 years after diagnosis, which “implies that both had incurable disease at diagnosis,” he said. Overall, two-thirds of the patients remain free from progression.
Rapid PSA progression has generally been defined as a doubling time of less than 3 years. Some researchers have found that a PSA velocity of more than 2.0 ng/mL per year corresponds to disease progression (N. Engl. J. Med. 2004;351:125–35).
The intervention criteria on a program of watchful waiting or “active surveillance” should include PSA doubling time or grade progression, Dr. Klotz advised. (See box.)
A 50-year-old man with good-risk prostate cancer could potentially face the psychological burden of living with prostate cancer for around 30 years, Dr. Klotz noted. But even patients who have been treated for prostate cancer still worry. At an office visit, the first thing that comes to the mind of a patient treated for prostate cancer is his PSA level.
To determine the validity of a watchful waiting approach, the START trial (Standard Treatment Against Restricted Treatment) will randomize 1,200–2,000 good-risk prostate cancer patients to active surveillance with selective delayed treatment or definitive therapy (radical prostatectomy, brachytherapy, or external beam radiation therapy).
Selective Use of PSA Testing
Instead of routinely testing PSA levels in all men, physicians could provide information on prostate cancer screening, suggested Timothy J. Wilt, M.D., an internist at the Minneapolis Veterans Affairs Center for Chronic Disease Outcomes Research.
This information could include the difference between prostate cancer and other prostate problems, descriptions of what PSA testing and digital rectal examinations can and cannot tell them, the consequences that may result from having a PSA test, and the risks and benefits of treatment options available for prostate cancer.
Physicians should target testing or treatment to men most likely to benefit from them but should also reassure those who are unlikely to benefit that not testing PSA or undergoing watchful waiting “is compassionate care that is likely to provide superior health outcomes,” Dr. Wilt recommended during another session at the symposium.
Criteria for Intervention
In patients with “good risk” prostate cancer (Gleason score of 6 or less, PSA level of 10.0 ng/mL or less, and clinically localized stage T1c-T2a), physicians may want to use either of the following strategies to determine when to intervene and begin appropriate treatment:
Rapid PSA Doubling Time
▸ Measure PSA level every 3 months for 2 years and then every 6 months thereafter.
▸ If the PSA doubling time is less than 3 years, it may be time to intervene.
Gleason Grade Progression on Repeat Biopsy
▸ Biopsy between 1 and 2 years, then every 3 years, stopping at age 80 years.
▸ Treat if there is progression to a predominant Gleason pattern 4 or worse.
Source: Dr. Klotz
ORLANDO, FLA. — Emerging insights into the clinical significance of prostate-specific antigen levels are leading to new approaches to screening, treatment, and patient counseling, speakers said at a symposium on prostate cancer sponsored by the American Society of Clinical Oncology.
PSA testing has led to the detection of indolent, slow-growing prostate cancers in many men. Overdetection of these biologically inconsequential cancers should prompt doctors to question their screening practices and the way they approach managing such patients, the speakers suggested.
The word “cancer” promotes aggressive treatment, which is often disproportionate to the natural history of minimal-volume, low-grade, “good risk” prostate cancer, said Laurence Klotz, M.D., professor of surgery at the University of Toronto
Interpreting PSA Levels
Accumulating evidence suggests that a normal PSA level—commonly thought of as below 4.0 ng/mL—is losing its clinical relevance for detecting prostate cancer.
The Prostate Cancer Prevention Trial (PCPT), the largest prostate cancer screening study to date, randomized 18,882 patients to the 5-α-reductase inhibitor finasteride or placebo. The trial is the only major study to date to obtain prostate biopsies when clinically indicated or when PSA levels rose above 4.0 ng/mL and at the end of the study, regardless of PSA level or treatment.
Among 2,950 men in the placebo arm of the PCPT who never had a PSA level higher than 4.0 ng/mL or an abnormal digital rectal examination, 15% had a biopsy positive for prostate cancer at the end of the 7-year study. About 27% of those with a PSA level of 3.1–4.0 ng/mL had a positive biopsy, and even 6.6% of men with a PSA level of up to 0.5 ng/mL had a positive biopsy (N. Engl. J. Med. 2004;350:2239–46).
“There is no level of PSA at which you have no risk of prostate cancer,” said Ian Thompson, M.D., lead investigator of the PCPT.
Despite the large percentage of placebo patients who had prostate cancer with PSA levels below 4.0 ng/mL, only 2.3% of them had high-grade cancer with a Gleason score of 7 or higher, and only 0.24% had high-grade cancer with a Gleason score of 8 or 9 (and none had a score of 10), noted Howard L. Parnes, M.D., chief of the prostate and urologic cancers research group in the division of cancer prevention at the National Cancer Institute.
That prostate cancer can occur at all PSA levels indicates “PSA is just a marker, like cholesterol. It is not dichotomous,” said Dr. Thompson, urology department chair at the University of Texas, San Antonio.
Physicians will need to individualize the decision to biopsy in light of these results, rather than just biopsy patients with PSA levels above 4.0 ng/mL, Dr. Parnes advised.
The Case for Watchful Waiting
In men older than 55 years with an initial PSA level of 3.0 ng/mL or less, systematic PSA testing, with biopsy of men whose PSA level rose above 4.0 ng/mL or who had an abnormal digital rectal examination, resulted in a prostate cancer detection rate of about 24% during a 7-year period, according to the results of the PCPT (N. Engl. J. Med. 2003;349:215–24).
But screening has been shown to increase the incidence-to-mortality ratio for prostate cancer from 2.5:1 to 15:1, Dr. Klotz said at a separate session at the symposium, which was cosponsored by the Society of Urologic Oncology and the American Society for Therapeutic Radiology and Oncology.
Prostate cancer progresses slowly in most patients, with long windows of curability. About 85%–90% of prostate cancer patients with a Gleason score of 6 or less have “insignificant” prostate cancer, which is not destined to cause morbidity or mortality during a patient's lifetime. Only 4%–8% of cancers with a Gleason score of 6 or less will progress to high-grade cancer after 8 years, he said. And even then, only some patients will die of the cancer.
Dr. Klotz suggested that doctors should consider “watchful waiting” more often in good-risk prostate cancer patients—those with a Gleason score of 6 or less, a PSA level of 10.0 ng/mL or less, and clinically localized stage T1c-T2a.
He and his associates followed 299 patients with clinically localized prostate cancer (Gleason score of 7 or less, stage T1b-T2b, and PSA level of 15.0 ng/mL or less). The patients were seen every 3 months for 2 years and then every 6 months thereafter, waiting for an increase in PSA level, clinical progression, or histologic upgrade on repeat biopsy before implementing appropriate treatment.
The median PSA doubling time in the cohort was 7 years. About 22% had a doubling time of less than 3 years, and 42% had a doubling time of more than 10 years. Only two patients died from prostate cancer, each at 5 years after diagnosis, which “implies that both had incurable disease at diagnosis,” he said. Overall, two-thirds of the patients remain free from progression.
Rapid PSA progression has generally been defined as a doubling time of less than 3 years. Some researchers have found that a PSA velocity of more than 2.0 ng/mL per year corresponds to disease progression (N. Engl. J. Med. 2004;351:125–35).
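For readers curious about the arithmetic behind the doubling-time criterion, the standard approach is to fit a line to the natural log of serial PSA values over time; the doubling time is ln(2) divided by the slope. The sketch below is an illustration of that calculation only, with made-up example values, not clinical software or any tool described by the speakers.

```python
import math

def psa_doubling_time_years(times_years, psa_values):
    """Estimate PSA doubling time in years from serial measurements.

    Fits a least-squares line to ln(PSA) versus time; doubling time
    is ln(2) / slope. Returns float('inf') when PSA is stable or
    falling (non-positive slope), i.e., no doubling.
    """
    logs = [math.log(p) for p in psa_values]
    n = len(times_years)
    t_mean = sum(times_years) / n
    y_mean = sum(logs) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in zip(times_years, logs))
    den = sum((t - t_mean) ** 2 for t in times_years)
    slope = num / den
    return math.log(2) / slope if slope > 0 else float("inf")

# Hypothetical example: PSA measured every 6 months for 2 years,
# rising from 4.0 to 8.0 ng/mL.
times = [0.0, 0.5, 1.0, 1.5, 2.0]
psa = [4.0, 4.8, 5.7, 6.7, 8.0]
print(round(psa_doubling_time_years(times, psa), 1))  # prints 2.0
```

A doubling time of 2 years would fall under the "less than 3 years" threshold that Dr. Klotz cites as a trigger for intervention.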
The intervention criteria on a program of watchful waiting or “active surveillance” should include PSA doubling time or grade progression, Dr. Klotz advised. (See box.)
A 50-year-old man with good-risk prostate cancer could potentially face the psychological burden of living with prostate cancer for around 30 years, Dr. Klotz noted. But even patients who have been treated for prostate cancer still worry. At an office visit, the first thing that comes to the mind of a patient treated for prostate cancer is his PSA level.
To determine the validity of a watchful waiting approach, the START trial (Standard Treatment Against Restricted Treatment) will randomize 1,200–2,000 good-risk prostate cancer patients to active surveillance with selective delayed treatment or definitive therapy (radical prostatectomy, brachytherapy, or external beam radiation therapy).
Selective Use of PSA Testing
Instead of routinely testing PSA levels in all men, physicians could provide information on prostate cancer screening, suggested Timothy J. Wilt, M.D., an internist at the Minneapolis Veterans Affairs Center for Chronic Disease Outcomes Research.
This information could include the difference between prostate cancer and other prostate problems, descriptions of what PSA testing and digital rectal examinations can and cannot tell them, the consequences that may result from having a PSA test, and the risks and benefits of treatment options available for prostate cancer.
Physicians should target testing or treatment to men most likely to benefit from them but should also reassure those who are unlikely to benefit that not testing PSA or undergoing watchful waiting “is compassionate care that is likely to provide superior health outcomes,” Dr. Wilt recommended during another session at the symposium.
Criteria for Intervention
In patients with “good risk” prostate cancer (Gleason score of 6 or less, PSA level of 10.0 ng/mL or less, and clinically localized stage T1c-T2a), physicians may want to use either of the following strategies to determine when to intervene and begin appropriate treatment:
Rapid PSA Doubling Time
▸ Measure PSA level every 3 months for 2 years and then every 6 months thereafter.
▸ If the PSA doubling time is less than 3 years, it may be time to intervene.
Gleason Grade Progression on Repeat Biopsy
▸ Biopsy between 1 and 2 years, then every 3 years, stopping at age 80 years.
▸ Treat if there is progression to a predominant Gleason pattern 4 or worse.
Source: Dr. Klotz