Glucose Meters: Watch Change In Measurement
LifeScan Inc. is notifying users that it is possible to accidentally change the measurement units on its OneTouch Ultra, InDuo, and OneTouch FastTake blood glucose meters, which can lead patients to misinterpret their results.
The company has found that users can inadvertently change the unit of measurement in the course of setting their meter's time and date. The two possible units on the affected meters are mg/dL and mmol/L.
The choice of measurement unit is generally determined by the country where the patient lives, and mg/dL is usually used in the United States.
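The two units differ by a fixed factor: glucose's molar mass is about 180 g/mol, so 1 mmol/L corresponds to roughly 18 mg/dL, which is why a silently switched setting is easy to misread. A minimal sketch of the conversion (illustrative, not LifeScan's firmware logic):

```python
# Glucose unit conversion: 1 mmol/L of glucose is about 18 mg/dL
# (molar mass of glucose is roughly 180 g/mol, and 1 dL = 0.1 L).
MGDL_PER_MMOLL = 18.0

def mgdl_to_mmoll(mgdl: float) -> float:
    """Convert a glucose reading from mg/dL to mmol/L."""
    return mgdl / MGDL_PER_MMOLL

def mmoll_to_mgdl(mmoll: float) -> float:
    """Convert a glucose reading from mmol/L to mg/dL."""
    return mmoll * MGDL_PER_MMOLL

# A normal fasting reading of 90 mg/dL displays as 5.0 mmol/L; a patient
# expecting mg/dL could mistake "5.0" for a dangerously low value.
print(mgdl_to_mmoll(90.0))  # → 5.0
print(mmoll_to_mgdl(5.0))   # → 90.0
```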
About 40 adverse events worldwide related to incorrect measurement settings on these meters have been reported to LifeScan.
Most of the adverse events consisted of temporary periods of low or high blood glucose.
Patients using the affected meters are advised to contact the company to confirm that their meter is set to the proper unit of measurement. Users may contact LifeScan customer service by calling 800-515-0915.
For more information, see the Food and Drug Administration's firm safety alert at www.fda.gov/oc/po/firmrecalls/lifescan04_05.html
Tender Point Criteria for Fibromyalgia Called Flawed
DESTIN, FLA. – The tender point criteria commonly used to diagnose fibromyalgia are not useful and in fact may even explain why the disease appears to disproportionately affect women, Daniel Clauw, M.D., said at a rheumatology meeting sponsored by Virginia Commonwealth University.
According to the American College of Rheumatology's 1990 classification criteria, patients must have both widespread pain and tenderness in 11 of 18 tender points in order to be diagnosed with fibromyalgia.
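In rule form, the 1990 classification criteria amount to a simple conjunction, which is part of what makes the cutoff feel arbitrary (an illustrative sketch, not a clinical tool):

```python
# The ACR 1990 classification rule as described above, reduced to a
# conjunction of widespread pain and a tender-point count (sketch only).
TENDER_POINT_CUTOFF = 11  # of 18 defined sites

def meets_1990_criteria(widespread_pain: bool, tender_points: int) -> bool:
    """True when both widespread pain and >= 11 of 18 tender points are present."""
    return widespread_pain and tender_points >= TENDER_POINT_CUTOFF

print(meets_1990_criteria(True, 12))  # → True
print(meets_1990_criteria(True, 9))   # → False
```

A patient with widespread pain but only 9 tender points fails the rule outright, which is the pattern Dr. Clauw argues disproportionately excludes men.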
Yet “tender points merely represent areas of the body where everyone is more tender,” explained Dr. Clauw, the executive director of the Chronic Pain and Fatigue Research Center at the University of Michigan in Ann Arbor.
Fibromyalgia patients and healthy individuals were found to have different pain thresholds at those tender points. But the difference between the two groups was just as pronounced at sites not considered tender (the forehead and the fingernails, for example) as at the recognized tender points. In addition, the cutoff of 11 out of 18 tender points is arbitrary. “We know that tenderness varies a great deal from day to day and week to week, especially in women,” he said.
In clinical practice, many physicians are realizing the arbitrary nature of the diagnostic criteria.
The diminished role of tender points represents a shift in the way physicians view the disorder. In the past, fibromyalgia was considered a discrete illness with pain and focal areas of tenderness. In more recent years, it has come to be appreciated as part of a larger continuum, with many somatic symptoms and diffuse tenderness all over the body, not just at the tender points.
Tender points are “not even a good way to measure tenderness,” as study findings suggest that the number of tender points correlates better with a patient's general stress than with pain, Dr. Clauw pointed out.
Women are 10 times more likely to have achy and tender points, so the higher incidence of fibromyalgia among them may be attributable to a selection bias created by the tender point criteria, he continued.
Men who have chronic widespread pain but not many tender points in many cases are given diagnoses other than fibromyalgia, “when in fact they probably have the exact same problem as women, who have a lot of tender points and meet other criteria for fibromyalgia.”
ADHD Drug Eases Anxiety, Not Depression
WASHINGTON – The attention-deficit hyperactivity drug atomoxetine does not appear to improve comorbid depression in adolescents, but it does appear to reduce comorbid anxiety in children and adolescents, according to data from two studies presented at the annual meeting of the Pediatric Academic Societies.
Both trials were sponsored by Eli Lilly & Co., maker of atomoxetine (Strattera).
In the first study, patients aged 8–17 years, who met the DSM-IV diagnostic criteria for both attention-deficit hyperactivity disorder (ADHD) and anxiety disorder (generalized anxiety, separation anxiety, or social phobia disorder), were randomized to receive either atomoxetine (87 patients) or placebo (89 patients) in a 12-week trial.
The mean age of the patients was roughly 12 years, and boys outnumbered girls 3–1. The target dose of atomoxetine was 1.2 mg/kg per day (split and given twice a day), said Calvin Sumner, M.D., of Eli Lilly.
ADHD symptoms were assessed using the ADHD Rating Scale (ADHDRS). The Pediatric Anxiety Rating Scale total score and the Multidimensional Anxiety Scale for Children (which allows children to rate their own anxiety) were used to assess anxiety symptoms. The last observations were carried forward.
As a way to minimize any placebo effect, those randomized to receive atomoxetine actually received placebo for the first 2 weeks of the trial. Any patients who had a 25% reduction in anxiety score during that period were allowed to finish the trial but not included in the final analysis.
In the analysis that excluded the early placebo responders, those on atomoxetine (55 patients) had a significant improvement in ADHD scores from baseline to the end point, compared with those on placebo (58 patients). When all patients were considered, there was a significant improvement in ADHD scores for patients on atomoxetine, compared with those on placebo.
In the smaller analysis, there also was a significant improvement in anxiety scores for those on atomoxetine, compared with those on placebo.
Among all patients, a significant improvement in anxiety scores was seen for those on atomoxetine, compared with those on placebo.
Also looking at the full group, the children who received atomoxetine had a greater perceived reduction in anxiety symptoms, compared with those who received placebo, as measured by the Multidimensional Anxiety Scale for Children.
Decreased appetite was the only adverse event that occurred more frequently in the atomoxetine group.
In the second trial, adolescents had to meet the clinical definition of both ADHD and major depressive disorder. “These were kids who really had major depression,” Dr. Sumner said.
The patients, aged 12–18 years, were randomized to receive 9 weeks of treatment with atomoxetine (72 patients) or placebo (70 patients). Boys outnumbered girls 3–1.
The target atomoxetine dose was 1.2 mg/kg each day, though patients could go up to a dose of 1.8 mg/kg each day. Both placebo and atomoxetine were given once a day. The response of ADHD symptoms was measured using the 18-question ADHDRS. Depressive symptoms were measured using the Children's Depression Rating Scale. Patients were assessed using the Young Mania Rating Scale, as a way of determining whether the depression experienced by these adolescents was a heralding event for bipolar disorder or true depression.
The ADHD and depression scores at 9 weeks were analyzed as change from baseline, with the last observation carried forward. Treatment-emergent mania was defined as a score of less than 15 on the mania scale at baseline and a score of 15 or greater at the end point.
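As a rough illustration (not the trial's actual analysis code), the two rules described above, last-observation-carried-forward imputation and the treatment-emergent mania flag, might look like this; the scores and visit structure are hypothetical:

```python
# Sketch of the two analysis rules described above (illustrative only;
# the scores below are hypothetical, not trial data).

def locf(visits: list) -> list:
    """Carry the last observed score forward over missed visits (None)."""
    filled, last = [], None
    for score in visits:
        if score is not None:
            last = score
        filled.append(last)
    return filled

def treatment_emergent_mania(baseline: float, endpoint: float) -> bool:
    """Mania scale score below 15 at baseline and 15 or greater at end point."""
    return baseline < 15 and endpoint >= 15

# A patient misses visits 3 and 4; the week-2 score is carried forward.
scores = [22, 18, None, None, 12]
print(locf(scores))                     # → [22, 18, 18, 18, 12]
print(treatment_emergent_mania(8, 17))  # → True
```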
“Atomoxetine really helped depression. There was a considerable reduction in the depressive rating scales. The other side of the story is, so did placebo,” Dr. Sumner said.
Placebo showed a very strong effect on depressive symptoms that was independent of its effect on ADHD.
“So this was inconclusive. There was no evidence–that was separable from placebo–that atomoxetine had any benefit in reducing depressive symptoms,” Dr. Sumner said.
Two patients in each group had treatment-emergent mania, a result that was not interpretable.
In terms of adverse events, nausea and decreased appetite were more common in the atomoxetine group. Importantly, there were no adverse events involving suicidal ideation or suicidal behavior in either group.
The meeting also was sponsored by the American Pediatric Society, the Society for Pediatric Research, the Ambulatory Pediatric Association, and the American Academy of Pediatrics.
In the smaller analysis, there also was a significant improvement in anxiety scores for those on atomoxetine, compared with those on placebo.
Among all patients, a significant improvement in anxiety scores was seen for those on atomoxetine, compared with those on placebo.
Also in the full group, the children who received atomoxetine had a greater self-reported reduction in anxiety symptoms than those who received placebo, as measured by the Multidimensional Anxiety Scale for Children.
Decreased appetite was the only adverse event that occurred more frequently in the atomoxetine group.
In the second trial, adolescents had to meet the clinical definition of both ADHD and major depressive disorder. “These were kids who really had major depression,” Dr. Sumner said.
The patients, aged 12–18 years, were randomized to receive 9 weeks of treatment with atomoxetine (72 patients) or placebo (70 patients). Boys outnumbered girls 3–1.
The target atomoxetine dose was 1.2 mg/kg per day, with titration allowed up to 1.8 mg/kg per day. Both placebo and atomoxetine were given once daily. ADHD symptoms were measured using the 18-item ADHDRS, and depressive symptoms were measured using the Children's Depression Rating Scale. Patients also were assessed with the Young Mania Rating Scale, as a way of determining whether the depression experienced by these adolescents was true depression or a heralding event for bipolar disorder.
The ADHD and depression scores at 9 weeks were analyzed as change from baseline, with the last observation carried forward. Treatment-emergent mania was defined as a score of less than 15 on the mania scale at baseline that rose to 15 or greater by the end point.
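As a sketch, the treatment-emergent mania rule described above reduces to a simple threshold check. The scale name and the 15-point cutoff come from the article; the function and the sample values are illustrative, not trial data:

```python
def treatment_emergent_mania(baseline_ymrs: float, endpoint_ymrs: float) -> bool:
    """Flag treatment-emergent mania as defined in the trial:
    a Young Mania Rating Scale score below 15 at baseline that
    reaches 15 or greater at the end point."""
    return baseline_ymrs < 15 and endpoint_ymrs >= 15

# Illustrative values (not trial data)
print(treatment_emergent_mania(8, 16))   # True: crossed the threshold
print(treatment_emergent_mania(8, 12))   # False: stayed below 15
print(treatment_emergent_mania(16, 20))  # False: already at 15+ at baseline
```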
“Atomoxetine really helped depression. There was a considerable reduction in the depressive rating scales. The other side of the story is, so did placebo,” Dr. Sumner said.
Placebo showed a very strong effect on depressive symptoms that was independent of its effect on ADHD.
“So this was inconclusive. There was no evidence, separable from placebo, that atomoxetine had any benefit in reducing depressive symptoms,” Dr. Sumner said.
Two patients in each group had treatment-emergent mania, a result that was not interpretable.
In terms of adverse events, nausea and decreased appetite were more common in the atomoxetine group. Importantly, there were no adverse events involving suicidal ideation or suicidal behavior in either group.
The meeting also was sponsored by the American Pediatric Society, the Society for Pediatric Research, the Ambulatory Pediatric Association, and the American Academy of Pediatrics.
Infertility Treatment Tied to Neural Tube Defects
WASHINGTON — Infants born to women treated for infertility—particularly those treated with clomiphene citrate around the time of conception—have a significantly increased risk of neural tube defects, according to results of a study presented at the annual meeting of the Pediatric Academic Societies.
In the study, singleton infants with neural tube defects were almost five times as likely (odds ratio, 4.8) as those in a healthy control group to have a mother with a history of infertility. Singletons with neural tube defects were 11.7 times more likely to have been exposed to clomiphene around the time of conception than were those in the control group, said Yvonne Wu, M.D., of the University of California, San Francisco.
Clomiphene citrate has been used to treat infertility since the 1960s. Beginning in the 1970s, some case reports suggested that the drug might be a risk factor for neural tube defects in offspring. Several observational studies conducted in the 1980s produced conflicting results. Animal studies have shown that clomiphene administration before ovulation leads to an increased risk of exencephaly in offspring.
The current case-control study was nested within the population of 110,624 singleton live births delivered at 36 weeks' gestation or later from 1994 to 1997. The data came from the Kaiser Permanente database for Northern California.
The researchers identified all infants in this group with a physician diagnosis of spina bifida, other spinal cord anomalies, or spinal cerebellar disease. Reviewers were blinded to maternal infertility.
In all, 18 cases of neural tube defects were identified, resulting in a birth prevalence of 1.6 per 10,000. These included 13 cases of spina bifida aperta (myelomeningocele and meningocele) and 5 cases of spina bifida occulta. A total of 1,608 control infants also were identified. These infants were free of cerebral palsy, genetic abnormalities, or congenital anomalies.
Univariate and multivariate odds ratios were calculated and adjusted for infant gender, maternal age, and maternal race. There were no demographic differences between the infants in the case and control groups.
A maternal history of infertility and clomiphene use were both independent predictors of neural tube defects.
Among the infants with neural tube defects, 22% of the mothers had a history of infertility, compared with only 6% of mothers of infants in the control group. Maternal history of infertility was determined from an infertility diagnosis in the Kaiser database, use of one of 23 infertility drugs documented in the Kaiser system, or evaluation at one of 11 infertility clinics in Northern California.
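Using the reported percentages and group sizes, an approximate unadjusted odds ratio can be reconstructed. (The published figure of 4.8 was adjusted for infant gender, maternal age, and maternal race, so the rounded reconstruction below will not match it exactly.)

```python
# Approximate 2x2 table from the reported figures: 22% of the 18 case
# mothers and 6% of the 1,608 control mothers had a history of infertility.
cases_exposed = round(0.22 * 18)          # ~4 case mothers with infertility history
cases_unexposed = 18 - cases_exposed      # ~14 without
controls_exposed = round(0.06 * 1608)     # ~96 control mothers with infertility history
controls_unexposed = 1608 - controls_exposed

odds_ratio = (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)
print(f"unadjusted OR = {odds_ratio:.1f}")  # roughly 4.5, vs. the adjusted OR of 4.8
```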
Seventeen percent of the infants with neural tube defects were exposed to clomiphene around the time of conception, compared with 2% of the infants in the control group. The periconceptional window was defined as 60 days before the date of conception to 15 days after. The date of conception was defined as the date of birth minus seven times the gestational age in weeks.
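The study's date definitions translate directly into date arithmetic. A minimal sketch, using hypothetical dates rather than study data (the function names are illustrative):

```python
from datetime import date, timedelta

def conception_date(birth_date: date, gestational_age_weeks: int) -> date:
    """Conception date = date of birth minus 7 x gestational age in weeks."""
    return birth_date - timedelta(weeks=gestational_age_weeks)

def in_periconceptional_window(exposure: date, conception: date) -> bool:
    """Periconceptional window = 60 days before conception through 15 days after."""
    return conception - timedelta(days=60) <= exposure <= conception + timedelta(days=15)

# Hypothetical example: a term birth at 40 weeks' gestation
dob = date(1996, 3, 1)
conc = conception_date(dob, 40)                            # 1995-05-26
print(in_periconceptional_window(date(1995, 5, 1), conc))  # True: inside the window
```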
Eighty percent of the women who had been given clomiphene around the time of conception had received multiple courses of the drug before conception. Previous research has shown that the active component of clomiphene is present in the bloodstream for more than a month, meaning that clomiphene is still present for 3–4 weeks after conception—the period when the neural tube closes.
The three exposed mothers of infants with neural tube defects had received an average of 5.7 courses of clomiphene, compared with 2.7 courses for the exposed mothers of control infants. This difference was significant, suggesting that there may be a dose response.
The meeting was sponsored by the American Pediatric Society, the Society for Pediatric Research, the Ambulatory Pediatric Association, and the American Academy of Pediatrics.
Opportunistic Disease and AIDS: Use Neuroimaging
ORLANDO, FLA. — Neuroimaging can make a big difference in the care of AIDS patients, who are vulnerable to several opportunistic diseases, one expert said at the annual meeting of the American Society of Neuroimaging.
James G. Smirniotopoulos, M.D., chairman of radiology at the Uniformed Services University of the Health Sciences in Bethesda, Md., noted that AIDS patients are vulnerable to both infectious and neoplastic opportunistic diseases. Neuroimaging is indicated in any AIDS patients who manifest:
▸ Mental status changes.
▸ Neurologic deficits.
▸ Seizures (focal or generalized).
▸ Headaches.
▸ Meningeal signs.
There are some cautions to keep in mind, though. AIDS patients typically have depression and other psychological conditions as a result of their situation, and these should be distinguished from genuinely neurologic causes. In addition, in a substance abuse population, seizures can be the result of substance withdrawal. Lastly, when AIDS patients complain of headaches, their immune status can determine the type of imaging used. For patients with severely suppressed CD4 counts (less than 200 cells/μL), get a CT scan. However, if the CD4 count is only mildly suppressed (greater than 200 cells/μL), get an MRI.
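The CD4-based imaging choice reduces to a simple rule. A sketch, with the 200 cells/μL threshold taken from the talk and the function itself purely illustrative:

```python
def imaging_modality(cd4_cells_per_ul: float) -> str:
    """Per the talk: CT for severely suppressed CD4 counts
    (< 200 cells/uL), MRI for mildly suppressed counts."""
    return "CT" if cd4_cells_per_ul < 200 else "MRI"

print(imaging_modality(150))  # CT
print(imaging_modality(350))  # MRI
```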
Once AIDS patients have been imaged, Dr. Smirniotopoulos and his colleagues triage them based on whether they have normal imaging results, atrophy, lesions without mass effect, or mass lesions.
“When you see a scan that looks like atrophy, you want to remember that you can have the spurious appearance of atrophy in patients with malnutrition, patients with dehydration, patients who are on steroids, patients who are on [long-term] renal dialysis—they all appear like atrophy,” he said.
AIDS encephalopathy—formerly known as AIDS dementia complex—can also appear as atrophy. On images, this condition typically appears as bilateral white matter volume loss, which may or may not be symmetrical.
“This is a disease process that is destructive of the parenchyma, but there's a lot of debate about what's really going on,” Dr. Smirniotopoulos said. Some have suggested that this condition is the result of the direct effect of the AIDS virus on the neurons and/or oligodendrocytes. Others have suggested that it may be a toxic reaction stimulated or produced by the macrophages or some type of autoimmune effect. Regardless of the exact cause, AIDS encephalopathy “is somehow related to the fact that the macrophages themselves are infected by the HIV virus,” he said.
Progressive multifocal leukoencephalopathy (PML) presents as lesions with geographic signal and density abnormalities but no mass effect. These lesions usually do not enhance with gadolinium. PML is a demyelinating white matter disease; on images, look for big geographic lesions that come right up to the gray matter and stop, Dr. Smirniotopoulos said.
The lesions are the result of infection with the ubiquitous JC papovavirus. As many as 70% of adults have antibodies to this virus, and almost 20% of patients with AIDS express antigens. PML is responsible for about 4% of AIDS deaths, and mortality is high: in the past, most patients with PML died within 4–6 months of diagnosis. Zidovudine and other antiretroviral drugs have improved survival only somewhat.
The two most common mass lesions seen on images in patients with AIDS are from primary infections and CNS lymphomas—with toxoplasmosis being the most common of the infections. “Toxoplasmosis is still probably what we think about first and foremost when an AIDS patient has a mass lesion,” Dr. Smirniotopoulos said.
If toxoplasmosis is suspected, try empiric therapy for 3 weeks; if any of the lesions fail to respond, it's time to get a biopsy, he said. The infection results primarily in paracentral brain abscesses. “Abscesses in toxoplasmosis tend to be relatively deep rather than being peripheral,” he said. The abscesses, which can be in gray or white matter, are round and uniformly convex, with smooth, thin walls, and are often multifocal.
It can be difficult to distinguish between a toxoplasmosis infection and lymphoma. “Lesions that involve the deep white matter and the deep gray matter at the same time might be CNS lymphoma or toxoplasmosis, and the problem is that both of these diseases occur in immunosuppressed patients,” Dr. Smirniotopoulos said.
The good news is that in most cases—roughly five out of six—primary CNS lymphoma has distinguishing features on imaging that allow diagnosis. Lymphoma is a small round cell tumor whose densely packed cells result in hyperattenuation on a noncontrast scan.
Immunosuppressants Pose Challenge in Pregnancy: Balancing immunosuppression with the health of the woman and fetus requires a team approach.
WASHINGTON — Balancing immunosuppression in a pregnant allograft transplant patient with the health of the woman and her fetus requires a team approach between high-risk obstetricians and transplant physicians, according to one expert speaking at a meeting sponsored by the National Kidney Foundation.
“Pregnancy in the transplant recipient, aside from the issue of renal dysfunction, poses a unique set of considerations, and that's because of immunosuppressants,” said Michelle A. Josephson, M.D., of the University of Chicago.
None of the immunosuppressants used for transplantation—cyclosporine, tacrolimus, azathioprine, steroids, rapamycin, and mycophenolate mofetil—are rated pregnancy category A, using the Food and Drug Administration classification system. In fact, most are rated category C, meaning there are no data on their use in humans during pregnancy. “All medications used to prevent rejection cross the maternal-placental interface,” she pointed out.
Despite the lack of data and potential risks, a consensus group convened in 2003 by the Women's Health Committee of the American Society of Transplantation recommended immunosuppression be maintained during pregnancy to avoid rejection.
Graft rejection can be difficult to discern during pregnancy because serum creatinine levels are low during this period, and small changes can be missed, Dr. Josephson said. In addition, abnormalities that turn up on liver function tests can have a number of etiologies. For these reasons, graft dysfunction during pregnancy warrants appropriate investigation—by biopsy if necessary.
“If rejection occurs, it can be treated with steroids,” Dr. Josephson recommended. Inadequate immunosuppression, graft instability, and rejection likely affect the graft prognosis. However, age, number of allografts, and repeat pregnancies don't seem to impact graft function and prognosis.
The consensus group also agreed that a high-risk obstetrician and a transplant physician should manage pregnant transplant patients. Obstetricians should optimize maternal health, maintain normal glycemia, ensure adequate fetal growth, and anticipate preterm birth. The transplant physician should ensure maintenance of graft function and aggressively manage hypertension and preeclampsia. Cesarean section is not indicated except for standard obstetric reasons.
During the conference, experts addressed a number of concerns about this group of patients in order to develop management recommendations. “We recognized that the risk of prematurity in the population was high. We realized that intrauterine growth retardation is high,” said Dr. Josephson. In addition, during pregnancy renal transplant recipients may have renal insufficiency, hypertension, and preeclampsia.
Traditionally, it was recommended to wait 2 years after transplantation to try to become pregnant. However, newer immunosuppressants have made rejection less of an issue. This opens the possibility for a more individualized approach to timing. The group agreed pregnancy could be attempted once certain criteria had been met:
▸ No graft rejection in the year after transplant.
▸ Adequate and stable graft function (creatinine level less than 1.5 mg/dL, no or minimal proteinuria).
▸ No acute infections that could impact the fetus.
▸ Maintenance of immunosuppression at stable dosing.
There are, however, special circumstances that could impact the recommendations:
▸ A rejection episode within the first year (consider further graft assessment, with biopsy and GFR measurement).
▸ Maternal age.
▸ Comorbid factors that may impact pregnancy and graft function.
▸ The patient's history of compliance.
The timing considerations could be met at 1 year post transplant, depending on the individual.
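For illustration only, the consensus timing criteria above can be sketched as a simple checklist. The function name, parameter names, and the 12-month threshold below are hypothetical choices for the sketch, not a clinical decision tool:

```python
# Hypothetical sketch of the consensus timing criteria as a checklist.
# Names and structure are illustrative only; special circumstances
# (maternal age, comorbidities, compliance) still require judgment.

def pregnancy_timing_criteria_met(
    months_since_transplant: int,
    rejection_in_past_year: bool,
    creatinine_mg_dl: float,
    significant_proteinuria: bool,
    active_infection: bool,
    immunosuppression_stable: bool,
) -> bool:
    """Return True if all consensus criteria are satisfied."""
    return (
        months_since_transplant >= 12      # could be met at 1 year post transplant
        and not rejection_in_past_year     # no rejection in the year after transplant
        and creatinine_mg_dl < 1.5         # adequate, stable graft function
        and not significant_proteinuria    # no or minimal proteinuria
        and not active_infection           # no infection that could impact the fetus
        and immunosuppression_stable       # maintenance at stable dosing
    )

print(pregnancy_timing_criteria_met(14, False, 1.1, False, False, True))  # True
```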
Care Varies for Transplant Recipients
A survey of the management practices of allograft transplant recipients who are or wish to become pregnant highlights the lack of consensus on the care of these patients.
Perhaps the most important finding of the survey was that the care of these women generally has been based on experience, patient preference, or center protocol, not on any available evidence, Dr. Josephson said.
“After nearly 50 years and thousands of deliveries, we should know what we're doing, but do we?” she asked.
The Women's Health Committee of the American Society of Transplantation sent out a questionnaire to all 257 transplant centers in the United States to determine the current practices for the care of transplant recipients who wish to become or are pregnant. The response rate was 56%.
The respondents had an average of 16 years' experience in transplant medicine.
A total of 82% said they recommend that their transplant patients not try to become pregnant for some period of time after receiving the transplant. Most who recommended a waiting period said their patients should wait 1–2 years. Almost 20% recommended that their patients never become pregnant. Most respondents—about three-quarters—did not limit their patients to one pregnancy.
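As a rough arithmetic check, the reported survey percentages translate into approximate head counts (simple rounding; the survey's exact tallies may differ slightly):

```python
# Back-of-envelope reconstruction of the survey counts reported above.
centers_surveyed = 257
response_rate = 0.56
respondents = round(centers_surveyed * response_rate)  # about 144 centers

recommend_waiting = round(respondents * 0.82)  # advise delaying pregnancy
advise_never = round(respondents * 0.20)       # almost 20% advise never

print(respondents, recommend_waiting, advise_never)  # 144 118 29
```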
Regarding immunosuppressant therapy in pregnancy, most respondents felt that older drugs—cyclosporine, tacrolimus, and steroids—were probably OK to use. “What I think was really interesting was that with azathioprine—one of the safest medications and actually the one that we have, aside from steroids, the most experience with—there was a little bit of debate,” she said.
Responses varied widely concerning the safety in pregnancy of newer immunosuppressants, such as rapamycin and mycophenolate mofetil.
High-risk obstetricians most commonly managed the pregnancies of transplant recipients, making up 85% of the physicians caring for these patients. Most respondents preferred going ahead with vaginal deliveries, although one-quarter of them recommended cesarean section for these patients.
Two-thirds of respondents advised their patients not to breast-feed.
Image of the Month
Magnetoencephalography (MEG) measures magnetic fields that are produced by small electrical currents that arise from neuronal activity in the brain. Through analysis of the spatial distribution of the magnetic fields, MEG enables physicians to localize epilepsy-induced abnormal electrical activity within the brain. This information is then overlaid on a magnetic resonance (MR) image, which provides anatomical detail. Both functional and structural information about the brain is visible in the combined image.
MEG has a number of advantages over other imaging modalities, beginning with its noninvasive nature. “We don't have to inject anything into the patient,” unlike some nuclear imaging techniques, said Eduardo M. Castillo, Ph.D., of the University of Texas in Houston. Patients aren't subjected to radiation or strong magnetic fields. Tests can be repeated without safety concerns, making MEG an especially attractive option for children and infants.
While functional magnetic resonance imaging (fMRI), positron-emission tomography (PET), and single-photon emission computed tomography (SPECT) assess brain function indirectly, MEG takes direct measurement of the brain's electrical function in real time.
With its high temporal resolution, MEG can be used to measure events lasting only milliseconds. fMRI, PET, and SPECT have much longer time scales. MEG also has excellent spatial resolution, localizing sources of activity with millimeter precision.
Changes in electrical activity in the brain affect the associated magnetic fields. These changes in the magnetic fields are captured by the MEG machine's array of superconducting detectors and amplifiers, said Dr. Castillo.
The equipment is housed in a specially shielded room to isolate the sensor from external noise produced by vibration and from electrical devices that produce magnetic fields.
In this case, MEG provided two types of information needed to plan the boy's epilepsy surgery: the location of the abnormal electrical (i.e., epileptiform) activity and the location of his speech centers.
The yellow triangles in the image on the left mark the site of interictal epileptiform electrical activity. For this measurement, the boy rested in the helmet-shaped sensor with his eyes closed. MEG measurement of interictal epileptiform activity was done in tandem with EEG to zero in on the abnormal activity, said Dr. Castillo.
The second type of information (indicated by red dots in the image on the right) is functional activity, recorded while the child listened to a series of words, to locate language function within the brain. When mapping functional activity, such as the ability to recognize words, the patient is subjected to repetitions of specific stimuli. Brain activity is averaged across all of the repetitions, which filters out any background brain activity that is not related to the task. The language function measurement takes about 30 minutes—long enough to repeat the task twice.
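The averaging step can be illustrated with a toy simulation (not actual MEG analysis code): a response locked to the stimulus adds up coherently across repetitions, while unrelated background activity, modeled here as Gaussian noise, averages toward zero.

```python
# Toy illustration of trial averaging: noise shrinks roughly by
# sqrt(n_trials), so the averaged trace tracks the evoked response.
import random

random.seed(0)
n_trials, n_samples = 100, 50

# Fixed "evoked" response occurring at the same latency in every trial.
evoked = [1.0 if 20 <= t < 30 else 0.0 for t in range(n_samples)]

# Each trial is the evoked response plus unrelated background activity.
trials = [
    [evoked[t] + random.gauss(0.0, 2.0) for t in range(n_samples)]
    for _ in range(n_trials)
]
average = [sum(trial[t] for trial in trials) / n_trials for t in range(n_samples)]

# Outside the response window, the average is close to zero
# (noise std of 2.0 drops to about 2.0 / 10 = 0.2 after 100 trials).
baseline = [average[t] for t in range(n_samples) if not (20 <= t < 30)]
print(max(abs(x) for x in baseline) < 1.0)  # background largely cancelled
```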
Typically children older than 5 years don't need to be sedated, but younger children do in order to remain still for the duration of the test.
After the MEG-derived map of the epileptogenic zone was confirmed intraoperatively, the area was resected, sparing the eloquent, language-specific cortex of the dominant hemisphere. The postsurgery map on the right shows that the boy's language-specific cortex was spared. After surgery his linguistic skills were intact, and he is currently seizure free, said Dr. Castillo. In addition, the boy has regained some of the cognitive abilities he had lost.
MEG also is used currently to map cognitive and sensory functions prior to surgery to remove brain tumors. In addition, the technique is being investigated to track the effect of different interventions following stroke, when the brain reorganizes the location of functions to compensate for the areas lost due to stroke. “We try to track changes in the organization of functions in the brain after stroke and understand how different types of interventions can modulate those changes,” said Dr. Castillo.
The group at the University of Texas in Houston is also conducting research into dyslexia and ADHD using MEG. Other groups are using MEG to better understand the progression of Alzheimer's disease.
Currently, nine facilities in the United States are using MEG clinically to plan surgery for epilepsy or brain tumors, said Dr. Castillo.
MEG data are overlaid on an MRI to allow resection planning; yellow triangles mark interictal activity, and red dots localize language activity (left). Postsurgical image confirms sparing of language cortex (middle). The sensor array covers the head only (right). Photo Courtesy Dr. Eduardo M. Castillo
Quantitative MRI Assessment Method Looks at Whole OA Joint
DESTIN, FLA. — While MRI applications for evaluating osteoarthritis are currently limited, methods are being developed that will eventually enable quantitative assessment of the disease, according to Charles Peterfy, M.D., who spoke at a rheumatology meeting sponsored by Virginia Commonwealth University.
Current applications of MRI include its use to noninvasively guide cartilage repair. The high-resolution delineation of cartilage defects and abnormalities that MRI provides can help guide patient selection and preoperative planning. Postoperatively, MRI can be used to monitor the integrity and durability of the repair, said Dr. Peterfy, a radiologist specializing in musculoskeletal imaging and the chief medical officer of Synarc Inc., a radiology services company.
In addition, newer applications of MRI “allow us, for the first time, to visualize all of the components of the joint simultaneously,” he said. MRI can be used to visualize menisci, ligaments, synovitis, bone abnormalities, and periarticular abnormalities.
The result is that instead of assessing in an isolated fashion any one aspect of the disease's effect, physicians can analyze the whole joint. This approach is facilitated by the development of a semiquantitative scoring system for evaluating osteoarthritis using MRI. Dr. Peterfy was one of the authors of the Whole-Organ Magnetic Resonance Imaging Score (WORMS) method for assessing the structural integrity of the knee.
The WORMS method can be used to evaluate independent articular features including: cartilage signal and morphology, subarticular bone marrow abnormality, subarticular cysts, subarticular bone attrition, marginal osteophytes, medial and lateral meniscal integrity, anterior and posterior cruciate ligament integrity, medial and lateral collateral ligament integrity, synovitis, loose bodies, and periarticular cysts/bursae.
One of the most promising areas of MRI research centers on bone marrow edema-like abnormalities. These edema-like signals may represent pulsion of joint fluid through breaks in the articular surface, localized inflammation, or changes associated with trauma due to biomechanical incompetence of the articular surface.
Whatever their etiology, bone marrow edema abnormalities behave like microtrauma, said Dr. Peterfy, who is also on the advisory board for MagneVu, the maker of portable MRI units.
Using a WORMS-like evaluation method, bone marrow edema abnormalities have been shown to correlate with pain and collagen II breakdown, and even to predict radiographic joint space narrowing and cartilage narrowing. In addition, these findings have been shown to progress very rapidly—in as few as 3 months.
In a study of 378 patients treated at several clinical centers worldwide, 82% had bone marrow abnormalities on MRI at baseline (as scored on the bone marrow subscale of WORMS), said Dr. Peterfy. The patients were followed for 3 months, at which time, 34% of those with baseline abnormalities had progression of these abnormalities. This change correlated well with urine concentrations of collagen type II degradation product (CTXII), which results from cartilage breakdown (Arthritis Rheum. 2005 [in press]).
ACR's Tender Point Criteria for Fibromyalgia Flawed, Expert Says
DESTIN, FLA. — The tender point criteria commonly used to diagnose fibromyalgia are not useful and in fact may even explain why the disease appears to disproportionately affect women, said Daniel Clauw, M.D., speaking at a rheumatology meeting sponsored by Virginia Commonwealth University.
According to the American College of Rheumatology's 1990 classification criteria, patients must have both widespread pain and tenderness in 11 of 18 tender points in order to be diagnosed with fibromyalgia.
Yet “tender points merely represent areas of the body where everyone is more tender,” explained Dr. Clauw, the executive director of the Chronic Pain and Fatigue Research Center at the University of Michigan in Ann Arbor. Fibromyalgia patients and healthy individuals were found to differ in pain thresholds at the recognized tender points, but the two groups differed just as much at sites not thought to be tender—the forehead and fingernails, for example. In addition, the cutoff of 11 out of 18 tender points is arbitrary. “We know that tenderness varies a great deal from day to day and week to week, especially in women,” he said.
In clinical practice, many physicians are realizing the arbitrary nature of the diagnostic criteria. The diminished role of tender points represents a shift in the way that they view the disorder. In the past, the disorder was considered a discrete illness with pain and focal areas of tenderness. In more recent years, fibromyalgia has been appreciated as part of a larger continuum, with many somatic symptoms and diffuse tenderness all over the body—not just at tender points.
Tender points are “not even a good way to measure tenderness,” as study findings suggest that the number of tender points correlates better with a patient's general stress than with pain, Dr. Clauw pointed out.