Vascular Parkinsonism Mimics Array of Traits

PORTO, PORTUGAL — Vascular parkinsonism displays a range of noncognitive symptoms, which explains why its diagnosis can depend on the bias of the specialist doing the evaluation, said Joseph Ghika, M.D., at the Fourth International Congress on Vascular Dementia.

The same group of symptoms might be referred to as vascular parkinsonism (or gait disorder) by movement disorder specialists, central incontinence by urologists, vascular depression by psychiatrists, apraxia of gait by neuropsychologists, gait disorder of hydrocephalus by neurosurgeons, cardiogenic dementia by cardiologists, senile gait disorder by geriatricians, and small- and/or large-vessel disease (or poststroke/multistroke dementia) by stroke specialists, said Dr. Ghika of the Centre Hospitalier Universitaire Vaudois in Lausanne, Switzerland.

Vascular parkinsonism accounts for 3%–6% of all Parkinson's disease (PD) cases. The evolution of vascular parkinsonism is more rapid than that of PD and may have a stepwise progression. Generally, patients with vascular parkinsonism are older than those with PD and have vascular risk factors. They are usually nonresponsive to dopa treatment.

Presentation may involve a number of symptoms that are not seen in other forms of cognitive impairment/dementia: gait disturbances (gait ignition failure, frontal gait disorder, frontal or subcortical disequilibrium), focal deficits, loss of sphincter control, emotional lability (forced laughter, pseudobulbar syndrome), and psychomotor slowing.

“It's a symmetrical axial/proximal Parkinson's that involves mostly the lower extremities,” said Dr. Ghika. Patients tend to be nontremulous except in posture and can have a mixture of rigidity and spasticity, with axial and proximal predominance without cogwheeling.

Associated gait disorders develop early in the course of degeneration, often at the same time that impairment of executive function becomes apparent. Patients have problems standing, starting to walk, and changing direction. Their steps are short and shuffling. They stand and walk with a wide base and turn without rotating the trunk. Posture is stooped but without flexion at the hip or knee, unlike in PD. Patients have great difficulty rising from a seated position and are often unable to do so unassisted. There is marked retropulsion with a loss of protective/postural reflexes. Arm swing is variable and may even be increased. This hodgepodge of movement traits has hampered ongoing efforts to identify a characteristic pattern of gait disturbance for vascular parkinsonism. “It's a problem of ataxia and apraxia all together,” said Dr. Ghika.

Corticobulbar/pseudobulbar syndrome affects more than half of patients with vascular parkinsonism, taking the form of emotional lability and/or forced laughter. Their faces often carry a “mask” of bewilderment. Their speech pattern is typically low, slow, and monotonous. Their speech may be dysarthric, very nasal, aprosodic, and monosyllabic. Their verbal communication may be aspontaneous, or they may be mute, stutter, or have palilalia. Normal olfaction is absent.

Urinary dysfunction/incontinence is also common; half of patients with vascular parkinsonism experience detrusor hyperreflexia. Dyskinesias are common. Hemichorea-hemiballism may be bilateral. Patients often have startle myoclonus, as well as postural (action) or Holmes tremors.

Focal neurologic deficits are quite common, affecting about 34% of those with vascular dementia. As many as 63% of those with vascular parkinsonism have brisk reflexes, by some estimates. Other pyramidal signs include synkinesias, clonus, and spasticity. Hemiataxia and hemianopia may be present.

It can be difficult to identify dementia due to vascular parkinsonism. The differential diagnosis includes hydrocephalus, other dementias (Alzheimer's disease, frontotemporal dementia, etc.), atypical parkinsonism (progressive supranuclear palsy, corticobasal ganglionic degeneration), idiopathic dopa-responsive PD, multiple sclerosis/leukodystrophies, other white matter diseases, and motor neuron disease.

Daily Exercise Increases Bone Mineral Content

NASHVILLE, TENN. — A school-based exercise program may be one way to head off osteoporosis later in life, according to results from a study presented at the annual meeting of the American Society for Bone and Mineral Research.

A school-based exercise program in the early school years appears to produce a greater increase in bone mineral content (BMC) and bone size than is seen in controls, said Christian Linden, M.D., of Malmö (Sweden) University Hospital.

The finding is from the Pediatric Osteoporosis Prevention (POP) study, a prospective, controlled population-based study assessing the effects of daily exercise in early school years on accrual of bone mineral.

A total of 121 children (73 boys and 48 girls) in grades 1 and 2 (average age 7.7 years) participated in 40 minutes of physical activity during each school day for 4 years. A control group of 100 age-, height-, and weight-matched children (52 boys, 48 girls) in nearby schools followed the standard Swedish curriculum, consisting of 60–90 minutes of physical activity each week.

At baseline there were no differences between the groups with regard to bone mass and size. At follow-up, the boys in the control group had a significantly higher Tanner stage on average; otherwise the children in the two groups were similar.

Boys in the intervention group had significantly greater BMC in the lumbar spine at follow-up after 4 years vs. those in the control group (7.0 g vs. 6.2 g). Girls in the intervention group had significantly higher BMC at the lumbar spine (9.1 g vs. 7.1 g) and femoral neck (0.39 g vs. 0.29 g) at follow-up than did those in the control group.

The annual increase in femoral neck width was greater in the intervention group than in the control group for girls (1.23 mm vs. 1.07 mm) and boys (1.45 mm vs. 1.03 mm).

Vertebral Strength Gains From Small Rise in BMD

NASHVILLE, TENN. — A small increase in bone mineral density associated with parathyroid hormone use appears to translate into markedly greater vertebral strength, according to data presented at the annual meeting of the American Society for Bone and Mineral Research.

To assess changes in vertebral strength, the researchers used finite element analysis, a technique borrowed from engineering, where it is used to design bridges, skyscrapers, airplanes, and more, said Dennis M. Black, Ph.D., a professor of epidemiology at the University of California, San Francisco.

Starting with a quantitative CT (QCT) scan, the structure of interest is divided into many finite elements—in this case voxels—each of which is assigned material properties based on its bone mineral density (BMD). Computer simulation is then used to apply a set of forces—in this case compressive force—to the model and estimate the mechanical response. The response of most interest is strength, which in engineering terms means the applied force necessary for the structure to fail—in other words, the force necessary for the vertebra to fracture.
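
The voxel-based pipeline described above can be caricatured in a few lines of code. The sketch below is a deliberately crude stand-in for a real finite element solver: it treats each vertical column of voxels as springs in series and the columns as acting in parallel, and every constant (the density-to-modulus power law, the yield strain) is a hypothetical placeholder, not a value from the study.

```python
import numpy as np

# Deliberately crude stand-in for voxel-based finite element analysis.
# All constants below are hypothetical placeholders, not study values.
rng = np.random.default_rng(0)
bmd = rng.uniform(0.1, 0.4, size=(10, 10, 20))     # g/cm^3 per voxel

# Step 1: map each voxel's density to an elastic modulus (power laws of
# this form are common in bone mechanics; coefficient and exponent here
# are illustrative only).
E = 8000.0 * bmd ** 2                              # MPa

# Step 2: idealize each vertical column of voxels as springs in series,
# and the columns as acting in parallel.
voxel_h = 0.089                                    # mm (89-um slices)
voxel_area = voxel_h ** 2                          # mm^2 cross-section
col_stiffness = 1.0 / np.sum(voxel_h / (E * voxel_area), axis=2)  # N/mm
total_stiffness = col_stiffness.sum()              # N/mm for the block

# Step 3: a crude "strength": the force at which the block's average
# axial strain reaches a nominal yield strain (hypothetical value).
yield_strain = 0.008
height = voxel_h * bmd.shape[2]                    # mm
strength = total_stiffness * yield_strain * height  # N
print(f"estimated compressive strength of the toy block: {strength:.2f} N")
```

A production QCT-based solver would instead assemble a full stiffness matrix over hundreds of thousands of elements and solve for the displacement field, but the series/parallel idealization captures the core idea: per-voxel density sets per-voxel stiffness, and the assembled model predicts a failure load.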

The researchers used the technique in a pilot study to assess vertebral strength changes, using data for 19 randomly selected women enrolled in the Parathyroid Hormone and Alendronate for Osteoporosis study. The postmenopausal women had all been randomized to receive 100 mcg of parathyroid hormone (1–84) daily for 1 year. QCT and dual x-ray absorptiometry (DXA) measurements were performed at baseline and at 1 year. The QCT data were used to estimate changes in vertebral strength with finite element analysis.

The average estimated compressive strength was 4,522 N. At 1 year the average estimated compressive strength was 5,715 N—a statistically significant increase in overall vertebral strength of 29%. While overall vertebral strength increased by 29%, overall BMD increased by only 6% based on DXA measurements.

The researchers were also able to virtually “peel away” the outer 2 mm of each vertebra (assumed to be cortical bone), in order to assess changes in the strength of trabecular bone. The increase in trabecular BMD was 29%, as measured by QCT. Trabecular bone accounted for 70% of the increase in total strength over the course of 1 year. At baseline, trabecular bone strength accounted for about half of total bone strength. “From this we inferred that the majority of the increase in strength is attributable to increases in trabecular strength,” Dr. Black explained.

The researchers also compared the increase in strength due to an average increase in bone density with the increase in strength due to a redistribution of bone density in the vertebra. To do this, the researchers assumed that the vertebra has a homogeneous density. So each element in the vertebra is assigned the same density—the average density for the vertebra. “So if we saw that the average density change leads to a strength change that is small, we would then infer that some of the overall increase in strength is due to a redistribution of density,” said Dr. Black.
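
The homogenization experiment Dr. Black describes can be illustrated with a toy one-dimensional model. In the sketch below, a stack of slices fails at its weakest slice, and slice strength is assumed to scale with density squared; both the failure rule and the numbers are hypothetical, chosen only to show why a model built from the average density can predict a different strength than the real, heterogeneous one.

```python
import numpy as np

# Toy version of the homogenization comparison. The density values, the
# density-squared strength law, and the weakest-link failure rule are
# all hypothetical, for illustration only.
density = np.array([0.20, 0.35, 0.15, 0.30, 0.25])   # g/cm^3 per slice

def stack_strength(rho):
    """Compressive strength of a stack limited by its weakest slice."""
    return float(np.min(50.0 * rho ** 2))             # MPa

heterogeneous = stack_strength(density)               # real distribution
homogenized = stack_strength(np.full_like(density, density.mean()))

print(f"heterogeneous: {heterogeneous:.3f} MPa")
print(f"homogenized:   {homogenized:.3f} MPa")
# Any gap between the two is strength attributable to how the density
# is distributed, not to its average.
```

Here the homogenized stack is stronger because averaging removes the weak slice; in the study, comparing the two models let the researchers separate the effect of the average density change from the effect of its redistribution.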

In fact, they found that the increase in average bone density accounted for most—but not all—of the increase in strength. “This suggests that not all of the increase in strength can be attributed to the average change in bone density,” he said.

Researchers had previously validated the technique by comparing finite element estimates of vertebral compressive strength with the results of compression tests on cadaveric vertebrae (Bone 2003;33:744–50). The finite element measures of strength correlated well with the test results.

The study was funded in part by NPS Pharmaceuticals Inc., maker of Preos (recombinantly produced, full-length human PTH), which is under development. Dr. Black also receives consulting fees from NPS Pharmaceuticals and is a speaker for Merck & Co. Inc., maker of Fosamax (alendronate).

Imaging Shows Gender Differences in Bone Aging

NASHVILLE, TENN. — Three-dimensional, high-resolution peripheral quantitative computed tomography has revealed significant differences in the way that trabecular bone microstructure changes with age in men and women.

The technique allows for in vivo assessment of bone density and trabecular microstructure, Sundeep Khosla, M.D., said at the annual meeting of the American Society for Bone and Mineral Research.

He and his associates at the Mayo Clinic, Rochester, Minn., imaged 278 men and 324 women, aged 21–97 years. The nondominant wrist was scanned to obtain 116 views at the distal end of the radius, with a slice thickness of 89 μm.

Differences in the structure of trabecular bone in men and women are evident in young adulthood. Compared with young women, young men have indices of trabecular structure that predict stronger bones and greater resistance to fractures—higher bone volume/tissue volume (BV/TV) and thicker trabeculae, he said.

Over their lifetimes, men and women have similar reductions in BV/TV, but “the structural basis for this parallel decrease in BV/TV seems to be quite different in women and men,” Dr. Khosla said.

In women aged 20–49 years, trabecular number remains stable, then declines at about the same rate as seen in men aged 50 years and older. Men show no long-term net change in trabecular numbers, because decreases from age 50 on are offset by increases from ages 20 to 49.

Trabecular separation increases by 24% over men's lifetimes, but most of this change occurs after age 50. “Trabeculae actually tend to get closer together in men between the ages of 20 and 50, and then separation increases.” The net effect is that there isn't much change, he said.

Trabecular thickness goes down more than twice as much in men as in women over their lifetimes. “Trabecular thickness goes down fairly linearly over life in women. But in men there is a much more dramatic decrease in trabecular thickness from about age 20 to age 50, and then it looks like it doesn't decrease further,” he said. “In women, aging is associated with loss of trabeculae, whereas in men the primary mechanism of the decrease in BV/TV appears to be trabecular thinning.”

“Losing trabeculae is much more detrimental to bone strength than is thinning trabeculae,” Dr. Khosla noted. A 10% drop in BV/TV due to a reduction in trabecular number results in a twofold to fivefold greater loss of bone strength than the same drop in BV/TV caused by a reduction in trabecular thickness.
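To illustrate the cited multiplier only, a minimal sketch (the 8% baseline strength loss below is a hypothetical value, not a figure from the study):

```python
# Illustrative arithmetic for the 2- to 5-fold figure cited by Dr. Khosla:
# the same 10% BV/TV drop costs far more strength when it comes from
# losing trabeculae than from thinning them.
def strength_loss_ratio(loss_from_thinning_pct, multiplier):
    """Strength loss (%) when the BV/TV drop comes from losing trabeculae,
    given the loss produced by the same drop via trabecular thinning."""
    return loss_from_thinning_pct * multiplier

# If thinning-driven loss cost, say, 8% of strength (hypothetical),
# number-driven loss of the same BV/TV would cost roughly:
low, high = strength_loss_ratio(8, 2), strength_loss_ratio(8, 5)
print(f"{low}%-{high}%")  # 16%-40%
```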

In a separate study, investigators used MRI-based virtual bone biopsy (VBB) to track trabecular microarchitecture changes in two groups of postmenopausal women aged 45–55 years—one receiving hormone therapy and the other not receiving the therapy.

A 20-patient treatment group received hormone therapy (0.05 mg/day estradiol transdermal patch); a 27-patient control group did not. All women received supplemental calcium (1,500 mg/day), said Glenn A. Ladinsky, M.D., of the University of Pennsylvania, Philadelphia.

In the control group, VBBs collected at the distal radius and the distal tibia showed conversion from trabecular plate to rod structure, indicating a reduction in bone strength during the 24-month study. Platelike trabecular architecture was preserved in patients who received hormone therapy. There was a 3%–4% reduction in bone mineral density in the control group, as measured by DXA. No changes in BMD were noted in the therapy group.

Dr. Ladinsky is a part owner of MicroMRI Inc., which developed the MRI-based VBB technology. The study was funded in part by Novartis Inc.

Differences in trabecular structure are shown here in two 24-year-olds, a man (top left) and a woman (top right), and in a 73-year-old man (bottom left) and a 71-year-old woman. Photos courtesy Dr. Sundeep Khosla

School Exercise in Early Years Results in Bone Mass Increase


NASHVILLE, TENN. — A school-based exercise program may be one way to head off osteoporosis later in life, according to results from a study presented at the annual meeting of the American Society for Bone and Mineral Research.

A school-based exercise program in early school years seems to be followed by a greater increase in bone mineral content (BMC) and bone size, said Christian Linden, M.D., of Malmö (Sweden) University Hospital.

The finding comes from the Pediatric Osteoporosis Prevention (POP) study, a prospective, controlled population-based study assessing the effects of daily exercise during early school years on the accrual of bone mineral.

A total of 121 children (73 boys and 48 girls) in grades 1 and 2 (average age 7.7 years) participated in 40 minutes of physical activity during each school day for 4 years. A control group of 100 age-, height-, and weight-matched children (52 boys and 48 girls) in nearby schools followed the standard Swedish physical education curriculum, consisting of 60–90 minutes of physical activity each week.

BMC was assessed using dual-energy x-ray absorptiometry measurements of the lumbar spine and the femoral neck at baseline and at yearly evaluations. The researchers also tracked duration of physical activity outside of school.

At baseline, there were no differences between the groups with regard to bone mass and size. At follow-up, the boys in the control group had a significantly higher Tanner stage on average; otherwise the children in the two groups were similar.

Boys who were in the intervention group had significantly greater BMC in the lumbar spine at follow-up after 4 years, compared with those in the control group (7.0 g vs. 6.2 g). Girls in the intervention group had significantly higher BMC at the lumbar spine (9.1 g vs. 7.1 g) and femoral neck (0.39 g vs. 0.29 g) at follow-up than did those in the control group. The annual increase in femoral neck width was greater in the intervention group than in the control group for girls (1.23 mm vs. 1.07 mm) and boys (1.45 mm vs. 1.03 mm).
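Those absolute figures can be restated as relative differences; the percentages in this quick sketch are derived from the reported numbers, not reported by the authors:

```python
# Relative gains implied by the 4-year BMC and femoral-neck-width figures
# (intervention vs. control) from the POP study; derived, not reported.
def pct_higher(intervention, control):
    """Percentage by which the intervention value exceeds the control value."""
    return round(100 * (intervention - control) / control, 1)

results = {
    "boys, lumbar spine BMC (g)":        pct_higher(7.0, 6.2),
    "girls, lumbar spine BMC (g)":       pct_higher(9.1, 7.1),
    "girls, femoral neck BMC (g)":       pct_higher(0.39, 0.29),
    "girls, femoral neck width (mm/yr)": pct_higher(1.23, 1.07),
    "boys, femoral neck width (mm/yr)":  pct_higher(1.45, 1.03),
}
for measure, gain in results.items():
    print(f"{measure}: about {gain}% higher with daily exercise")
```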

The findings support those from earlier studies suggesting that the best time to increase bone mineral accrual through exercise is the prepubertal period. Approximately 30% of bone mass acquired over a lifetime can be influenced by nongenetic factors, such as exercise.

Exercise programs in early life show potential as a prevention strategy for osteoporosis, Dr. Linden concluded.

Quick Dementia Screen Increased Number of Diagnoses


WASHINGTON – Routine use of a simple dementia screening tool can boost the number of possible dementia cases identified in primary care without putting a substantial drain on physician time, according to data presented at an international conference sponsored by the Alzheimer's Association.

After institution of dementia screening for all patients 65 years old and older in two primary care clinics, the number of dementia diagnoses made by geriatricians and primary care physicians rose significantly. The number of diagnoses did not change at two other primary care facilities that served as controls, said Soo Borson, M.D., professor of geriatric psychiatry at the University of Washington in Seattle.

For the trial, dementia screening was performed by medical assistants who had been specially trained in administering and interpreting the Mini-Cog test, developed by Dr. Borson and her colleagues. Data from electronic records at two other primary care clinics served as controls.

The Mini-Cog screening test takes 1–3 minutes to administer and involves a three-item recall portion to assess memory and a clock-drawing test. The Mini-Cog is as effective as, if not better than, the Mini-Mental State Examination in identifying cognitive impairment, she said. A score of 0–2 on a scale of 5 indicates impairment.
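As a sketch, the commonly described Mini-Cog scoring (assuming 1 point per recalled word and a clock drawing scored 0 or 2; the article states only the 0–5 scale and the 0–2 impairment range) looks like:

```python
# Sketch of Mini-Cog scoring as commonly described; the point breakdown
# (1 per recalled word, clock scored 0 or 2) is an assumption not
# spelled out in the article.
def mini_cog_score(words_recalled: int, clock_normal: bool) -> int:
    """Total score on the 0-5 Mini-Cog scale."""
    if not 0 <= words_recalled <= 3:
        raise ValueError("recall is scored 0-3")
    return words_recalled + (2 if clock_normal else 0)

def screen_positive(score: int) -> bool:
    """Scores of 0-2 (below the cut point of 3) suggest impairment."""
    return score < 3

print(mini_cog_score(1, False), screen_positive(1))  # 1 True
```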

The medical assistants informed physicians of the results and entered the results into the patient's medical record.

The medical assistants' scoring showed 96% agreement with expert raters. The researchers identified the percentage of the clinic caseload comprising dementia diagnoses, dementia referrals, cognitive impairment referrals, or prescriptions of cholinesterase inhibitors in the year following the start of cognitive screening.

Of the 540 patients eligible for screening at the test clinics, 70% were successfully screened; fewer than 1% refused. Of those screened, 16% scored below the Mini-Cog cut point of 3.
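Back-of-the-envelope counts implied by those percentages (derived, not reported):

```python
# Approximate head counts implied by the reported percentages
# at the test clinics; the article gives only percentages.
eligible = 540
screened = round(eligible * 0.70)   # 70% successfully screened
positive = round(screened * 0.16)   # 16% scored below the cut point of 3
print(screened, positive)  # 378 60
```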

Prior to use of the screening tool, 11% of patients seen by the geriatricians were diagnosed with dementia, compared with 4% of those seen by primary care physicians.

After screening was introduced, the proportion of patients diagnosed with dementia rose to 15% for geriatricians and 6% for primary care physicians. In comparison, of the 1,143 patients treated at the control clinics, only 2% had a dementia diagnosis in both years.

The increase in the number of patients identified with cognitive problems at the test clinics didn't necessarily translate into more care for dementia.

“Even for geriatricians, less than 50% of people diagnosed with dementia were actively treated for it,” Dr. Borson said at the meeting.

Primary care referrals to specialists for suspected dementia also increased in response to screening but did not change in control clinics.

Prescribing of medications for dementia increased slightly, but only among nongeriatricians in the test clinics. “This shows that screening has some effect, but isn't a sufficient intervention,” Dr. Borson told this newspaper.

The findings show that screening can improve recognition of possible dementia.

Based on this and other studies by Dr. Borson's group, simple screening tools like the Mini-Cog appear to help the most with identifying the less cognitively impaired individuals whom physicians often miss.

Depression Common in Eating Disorders, Complicates Tx


BALTIMORE — Depression frequently co-occurs with eating disorders, making treatment challenging, Graham W. Redgrave, M.D., said at a symposium on mood disorders sponsored by Johns Hopkins University.

“There are high rates of concurrent major depressive disorder in anorexia,” said Dr. Redgrave of the Johns Hopkins University in Baltimore. Among patients with the restricting type of anorexia, 15%–50% also have major depressive disorder (MDD). The rates among patients with the binge-eating/purging type of anorexia are even higher at 46%–80%. The rates are higher still when these patients are asked whether they have ever had depression.

Numbers like these suggest that anorexia might simply be a behavioral manifestation of an underlying mood disorder. However, controlled family studies have provided good evidence that these disorders are different and independent, Dr. Redgrave said.

One reason so much overlap exists between anorexia and MDD is that starvation itself produces a host of psychiatric symptoms, such as mood lability, irritability, anxiety, apathy, obsessiveness, poor concentration, social withdrawal, and decreased libido.

Patients with anorexia aren't the only ones suffering from comorbid depression. Among patients with bulimia, 30%–60% have concurrent MDD and 50%–65% have had a lifetime occurrence of depression.

In patients with bulimia, starvation magnifies feelings of guilt, shame, and hopelessness, Dr. Redgrave said. As binge-purge cycles become more frequent, the ability to concentrate declines because the fear of being overweight grows in importance.

Depression also is high among patients with binge-eating disorder, with 36%–60% of these patients also having MDD. In addition, 48% of obese women who binge also have MDD, compared with only 26% of obese women who do not binge. “It's not just the obesity. There's something about the psychopathology of depression and the binge eating that seems to be related,” Dr. Redgrave said.

Treatment of patients with eating disorders and depression can be a challenge because “when you are treating an eating disorder, you are asking your patient to give up something that is very rewarding.” Patients can recognize that what they're doing is problematic but have a hard time giving it up, Dr. Redgrave said at the symposium, also sponsored by the Depression and Related Affective Disorders Association.

Treatment for an eating disorder focuses on behaviors and then on thoughts and feelings. Underlying connections and associations are addressed only when the patient is stabilized.

Pharmacotherapy is primarily an adjunctive treatment for patients with anorexia. Antidepressants are of modest but important benefit in bulimia nervosa, Dr. Redgrave said. Fluoxetine at high doses is especially useful, though most antidepressants offer some benefit in this population. Bupropion is contraindicated because of the risk of seizures.


MRI Helps Find Cause of Orthostatic Headache

Article Type
Changed
Display Headline
MRI Helps Find Cause of Orthostatic Headache

Diffuse meningeal enhancement on MRI with gadolinium contrast can confirm the diagnosis of intracranial hypotension when a patient presents with orthostatic headaches, said Todd J. Schwedt, M.D., a neurology fellow at the Mayo Clinic in Scottsdale, Ariz.

Patients who present with an ongoing severe, diffuse pressure headache that is worse when standing and relieved upon lying down may have intracranial hypotension due to a cerebrospinal fluid (CSF) leak. They may also experience very intense pain with sneezing, coughing, or Valsalva maneuvers, and the headaches may be accompanied by vomiting and diminished hearing. Caffeinated beverages provide some relief from the headaches.

The differential diagnosis for orthostatic headache includes spontaneous intracranial hypotension (as in these cases), postdural puncture headache (resulting from a lumbar puncture or spinal anesthesia), and CSF fistula. Additional symptoms of spontaneous intracranial hypotension due to CSF leak can include diplopia, dizziness, visual blurring, interscapular pain, and radicular upper extremity symptoms.

On MRI of the brain with gadolinium contrast, the classic sign of intracranial hypotension due to a CSF leak is contiguous pachymeningeal enhancement, Dr. Schwedt said. Spontaneous CSF leaks also can cause generalized sagging of the brain with downward displacement of the cerebellar tonsils that is clearly visible on MRI with gadolinium.

MRI of the spine with and without contrast may not be as helpful. Despite the presence of a CSF leak, spinal MRIs may appear normal, with no visible collection of CSF. On occasion, spinal MRI may show pooling of extraarachnoid CSF, but this rarely identifies the exact location of the leak.

Autologous epidural blood patch is used to treat the CSF leak, even when MRI has not located the exact site of the leak. A history of minor trauma, orthostatic headaches, hearing changes, and MRI findings are considered reason enough to perform the blood patch.

For an epidural blood patch, 10–20 mL of autologous blood is injected into the epidural space. “Injection into the lumbar region can be adequate since the blood may travel to the site of the dural leak and injection will result in elevated CSF pressure,” Dr. Schwedt said.

Although the exact mechanism by which an epidural blood patch relieves symptoms is controversial, pain relief may be due to the formation of a gelatinous tamponade that stops the CSF leak and immediately elevates CSF pressure. Alternatively, the patch may increase CSF pressure by compressing the thecal sac, effectively reducing the volume of the intrathecal space. Relief usually occurs fairly quickly, often within 30 minutes. The technique is typically performed by anesthesiologists.

Performing a lumbar puncture to see if a patient with such symptoms has a low opening pressure is an option. The CSF pressure may or may not be low, and the CSF may contain increased levels of protein and erythrocytes. However, given the diffuse meningeal enhancement seen on MRI and the clinical presentation, a clinical diagnosis can be made and a lumbar puncture avoided, as it has the potential to worsen the patient's symptoms.

Axial MRI with gadolinium contrast shows diffuse contiguous pachymeningeal enhancement (left). Without gadolinium (center), MRI looks normal. There was no evidence of cerebellar descent in this sagittal MRI with gadolinium. Photos courtesy Dr. Todd J. Schwedt


Addition of Herceptin to Chemo Is Linked to Increased Cardiotoxicity

Article Type
Changed
Display Headline
Addition of Herceptin to Chemo Is Linked to Increased Cardiotoxicity

The Food and Drug Administration and Genentech Inc. are notifying physicians of new data demonstrating a significant increase in cardiotoxicity in patients randomized to receive Herceptin (trastuzumab) along with standard adjuvant chemotherapy, compared with patients who received chemotherapy alone.

The data come from the National Surgical Adjuvant Breast and Bowel Project (NSABP) study (B-31), a phase III trial involving 2,043 women with operable HER2 overexpressing breast cancer (immunohistochemistry test score of 3 or greater or a positive fluorescence in situ hybridization test score).

Preliminary analysis of safety data from this trial and the North Central Cancer Treatment Group (NCCTG) study (N9831) revealed a statistically significant increase in the 3-year cumulative incidence of New York Heart Association class III and IV congestive heart failure and cardiac death in patients who received the Herceptin-containing regimen (4.1%), compared with the chemotherapy-alone group (0.8%). No cardiac deaths were observed in patients who received the Herceptin-containing regimen; one cardiac death occurred in the control arm. Final analysis of the cardiac safety data from these two studies is ongoing.

Herceptin as a single agent is indicated for the treatment of patients with metastatic breast cancer whose tumors overexpress the HER2 protein and who have received one or more chemotherapy regimens. Herceptin in combination with Taxol (paclitaxel) is indicated for treatment of patients with metastatic breast cancer whose tumors overexpress the HER2 protein and who have not received chemotherapy.

For additional information, contact Genentech Inc.'s medical communications department by calling 800-821-8590 or by visiting www.gene.com/gene/contact/.


One-Year Survival Is Rising in HIV-Infected Organ Transplant Recipients

Article Type
Changed
Display Headline
One-Year Survival Is Rising in HIV-Infected Organ Transplant Recipients

WASHINGTON — The introduction of highly active antiretroviral therapy, more effective prophylactic regimens, and improvements in surgical technique and antirejection therapy have made solid organ transplantation a possibility for HIV-infected patients, said Marla J. Keller, M.D., at a meeting sponsored by the National Kidney Foundation.

Based on the most recent analysis from an ongoing, multicenter, prospective, observational study, survival among HIV-infected kidney transplant patients at 1 year was 93.8%. For comparison, the 1-year survival for kidney transplant patients in the Organ Procurement and Transplantation Network database was 95.6% (1999–2001), said Dr. Keller of Mount Sinai School of Medicine in New York.

This analysis included 29 patients, 18 of whom received kidney transplants. The patients were enrolled in the study between 2000 and 2003. Potential kidney recipients were included in the study if they had CD4 T-cell counts of at least 200 cells/mm³, HIV RNA below 50 copies/mL, and no history of opportunistic infections. Patients are being followed for up to 5 years.

Initial immunosuppressive therapy included cyclosporine or tacrolimus in combination with prednisone, with or without mycophenolate mofetil. Rejections were managed with steroid pulses, changing calcineurin inhibitors or doses, and/or adding sirolimus and/or Thymoglobulin. All antiretroviral drugs were allowed, though AZT and stavudine (d4T) use was minimized. Standard transplant prophylaxis was used for several opportunistic organisms.

Most of the kidney transplant recipients (17) were male. There were slightly more white patients (10) than African American patients (8). Kidney donors were fairly evenly split: five related, living; three unrelated, living; six deceased; and four high infectious risk, deceased. Organs from deceased donors were considered high infectious risk if they were serologically negative for HIV and hepatitis B and C but the donor's behavior put him or her at risk for recent acquisition of these infections. The median baseline CD4 T-cell count was 439 cells/mm³.

One opportunistic infection occurred in a diabetic patient, who developed Candida esophagitis. Surprisingly, 12 kidney recipients (67%) had graft rejection, mostly of the early acute cellular type. Seven patients received Thymoglobulin in response to eight rejection episodes. The 1-year cumulative rejection estimate was 52%, said Dr. Keller. One kidney transplant recipient died because of congestive heart failure, and two patients had graft loss—one because of rupture from severe acute rejection and one due to chronic allograft nephropathy.
