Dissecting melancholia with evidence-based biomarker tools
For more than 50 years, depression has been studied, and understood, as a deficiency of specific neurotransmitters in the brain—namely dopamine, norepinephrine, and serotonin. Treatments for depression have been engineered to increase the release, or block the degradation, of these neurotransmitters within the synaptic cleft. Although a large body of evidence supports involvement of dopamine, norepinephrine, and serotonin in the pathophysiology of depression, the observation that pharmacotherapy is able to induce remission only in <50% of patients1 has prompted researchers to look beyond neurotransmitters for an understanding of depressive disorders (Table 1).
Today, theories of depression focus more on differences in neuron density in various regions of the brain; the effect of stress on neurogenesis and neuronal cell apoptosis; alterations in feedback pathways connecting the prefrontal cortex to the limbic system; and the role of proinflammatory mediators evoked during the stress response (Box,2,3). These theories should not be viewed as separate entities because they are highly interconnected. Integrating them provides a more expansive understanding of the pathophysiology of depression and of the biomarkers involved (Table 2).
In this article, we:
- integrate the large body of evidence supporting the contribution of the above variables to the onset and persistence of depression
- propose a possible risk stratification model
- explore possibilities for treatment.
The stress response: How does it affect the brain?
Stress initiates a cascade of events in the brain and peripheral systems that enable an organism to cope with, and adapt to, new and challenging situations. That is why physiologic and behavioral responses to stress generally are considered beneficial to survival.
When stress is maintained for a long period, both brain and body are harmed because target cells undergo prolonged exposure to physiologic stress mediators. For example, Woolley and Gould4 exposed rats to varying durations of glucocorticoids and observed that treating animals with corticosterone injection for 21 days induced neuronal atrophy in the hippocampus and prefrontal cortex and increased release of proinflammatory cytokines from astrocytes within the limbic system. Stressful experiences are believed to be closely associated with development of psychological alterations and, thus, neuropsychiatric disorders.5 To go further: Chronic stress is believed to be the leading cause of depression.
When the brain perceives an external threat, the stress response is called into action. The amygdala, part of the primitive limbic system, is the primary area of the brain responsible for triggering the stress response,6 signaling the hypothalamus to release corticotropin-releasing hormone (CRH) to the anterior pituitary gland, which, in turn releases adrenocorticotropic hormone to the adrenal glands (Figure 1).7 The adrenal glands are responsible for releasing glucocorticoids, which, because of their lipophilic nature, can cross the blood-brain barrier and are found in higher levels in the cerebrospinal fluid (CSF) of depressed persons.7
Once in the brain, glucocorticoids can be irreversibly degraded in the cytosol by the enzyme 11β-hydroxysteroid dehydrogenase type 2, a potential target for treating depression, or can bind to the glucocorticoid receptor (GR). Results of a study of cortisol's role in suppressing proinflammatory cytokine signaling in rainbow trout hepatocytes suggest a negative feedback loop for GR gene regulation during stress.8
Because this auto-regulation is a crucial step in the physiological stress response, the idea of the GR as an important biomarker in depression has gained popularity. In humans, when the GR binds to glucocorticoids released from the adrenal cortex during the stress response, the activated GR-cortisol complex represses transcription of proinflammatory genes in astrocytes and microglial cells and in cells throughout the periphery.9 The GR also has been shown to modulate neurogenesis.8 Repeated stress that persists over a long period leads to GR resistance, thereby reducing inhibition of proinflammatory cytokine production.
Exposure to stress for >21 days leads to overactivity of the HPA axis and GR resistance,10 which decreases suppression of proinflammatory cytokines. There is evidence that the proinflammatory cytokines tumor necrosis factor-α and interleukin-6 further induce GR resistance by preventing the cortisol-GR complex from entering cell nuclei and by decreasing its binding to DNA within the nuclei.11 Dexamethasone, a GR agonist, has been investigated for potential re-regulation of the HPA axis in depressed persons.12
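The negative-feedback logic described above—cortisol suppressing further CRH release via the GR, and GR resistance weakening that suppression—can be caricatured in a few lines of Python. This is purely an illustrative toy model of our own, not from the literature; the function name, parameters, and all numeric values are arbitrary assumptions chosen only to show the qualitative behavior.

```python
# Toy model (illustrative only): HPA-axis negative feedback.
# "gr_sensitivity" stands in for glucocorticoid-receptor signaling;
# lowering it mimics the GR resistance described in the text.
# All parameter values are arbitrary.

def simulate_hpa(gr_sensitivity, steps=200, drive=1.0):
    """Iterate a minimal feedback loop: CRH drive raises cortisol,
    and cortisol (via the GR) suppresses subsequent CRH release."""
    cortisol = 0.0
    for _ in range(steps):
        crh = drive / (1.0 + gr_sensitivity * cortisol)  # feedback inhibition
        cortisol += 0.1 * (crh - cortisol)               # relax toward CRH-set level
    return cortisol

normal = simulate_hpa(gr_sensitivity=5.0)     # intact feedback
resistant = simulate_hpa(gr_sensitivity=0.5)  # GR resistance
# Impaired feedback yields a higher steady-state cortisol level
# (resistant > normal), consistent with the dysregulation described above.
```

The steady state each run converges to is the fixed point of the loop, so weakening GR sensitivity raises baseline "cortisol" even though the CRH drive is unchanged—the qualitative point of the paragraph above.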
Nerve cell death in the hippocampus
Several lines of evidence bring our focus to the hippocampus in depression: studies showing reduced hippocampal volume in unipolar depression and a correlation between shrinkage and the number of episodes, suggesting shrinkage is a consequence of untreated depression; studies suggesting that treatment can stop or reduce shrinkage;13 and recent findings of rapid hippocampal neurogenesis in response to ketamine.
The greatest density of GRs is found in the hippocampus, which is closely associated with the limbic system.7 Therefore, the hippocampus is sensitive to increases in glucocorticoids in the brain and plays a crucial role in regulation of the HPA axis.
Evidence shows that in chronic stress exposure (≥21 days), nerve cells in the hippocampus begin to atrophy and can no longer provide negative feedback inhibition to the hypothalamus, causing HPA axis dysregulation and uncontrolled release of glucocorticoids into the bloodstream and CSF.2 In patients with Cushing syndrome, who produce abnormally high levels of glucocorticoid, the incidence of depression is as high as 50%.14 Similarly, patients treated with glucocorticoids such as prednisone often experience psychiatric symptoms, the most common being depression. Gould found that partial adrenalectomy increased hippocampal neurogenesis in rat brains, indicating the beneficial effect of stress hormone antagonism.4 CRH antagonists are being investigated as a promising, less invasive treatment option for depression.
Focus has been diverted to the role of the hippocampus in depression because of its ability to regenerate throughout adulthood, potentially leading to re-regulation of the HPA axis and subsiding of the stress response, which is widely believed to be the primary precipitating factor in depression onset. Rats require 10 to 21 days of rest to recover from the effects of chronic (21 days) administration of glucocorticoids.15 If this proves to be a directly proportional relationship, rats would need an estimated 120 days to recover from 6 months of constant glucocorticoid exposure. If the same holds for humans, current depression treatment programs, which average 6 weeks, are not long enough for adequate recovery.
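To make the extrapolation arithmetic explicit (our own back-of-envelope restatement of the figures above): a 10-to-21-day recovery after 21 days of exposure implies a recovery-to-exposure ratio of roughly 0.5 to 1, so 6 months (about 180 days) of exposure predicts

```latex
t_{\text{recovery}} \approx \left(\tfrac{10}{21}\ \text{to}\ \tfrac{21}{21}\right) \times 180\ \text{days} \approx 86\ \text{to}\ 180\ \text{days},
```

a range that brackets the roughly 120-day estimate cited above—and, in either case, far exceeds a 6-week treatment program.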
Antidepressants such as selective serotonin reuptake inhibitors, serotonin-norepinephrine reuptake inhibitors, and tricyclics stimulate neurogenesis in the hippocampus via increases in brain-derived neurotrophic factor (BDNF), suggesting that these neurotransmitters play an important role in depression.16
Repetitive transcranial magnetic stimulation (rTMS), a noninvasive neuromodulation therapy approved to treat major depression, delivers brief magnetic pulses to the limbic structures. Treatment facilitates focal stimulation, rapidly applying electrical charges to the cortical neurons. TMS targets prefrontal circuits of the brain that are underactive during depressive episodes. Recent animal studies have suggested that bromodeoxyuridine (BrdU)-positive cells (newborn cells) are increased significantly in the dentate gyrus, in turn suggesting that hippocampal neurogenesis might be involved in the antidepressant effects of chronic rTMS.17 Although the underlying therapeutic mechanisms of rTMS treatment of depression remain unclear, it appears that hippocampal neurogenesis might be required to produce the effects of antidepressant treatments, including drugs and electroconvulsive therapy.17
Selective ‘shunting’ of energy occurs during the stress response
Hormones released from the adrenal glands during stress divert glucose to exercising muscles and the brain’s limbic system, which are involved in the fight-or-flight response.18 However, metabolic functions and areas of the brain that are not involved in the stress response, such as the cerebral cortex and hippocampus, are deprived of energy as a consequence of this innate selective shunting (Figure 2).19
Positron-emission tomography (PET) scanning of the resting brain shows that components of the cerebral cortex (prefrontal cortex, hippocampus, striatum) and areas connecting the cerebral cortex to the limbic system exhibit the most energy consumption in the brain during rest (Figure 3).20 PET studies also show that neuronal connections within these energy-demanding areas atrophy more rapidly than in any other area of the brain when their energy supply is reduced or cut off.6
When the supply of oxygen and glucose to certain areas of the brain is reduced—such as in traumatic brain injury or stroke—the excitatory neurotransmitter glutamate accumulates in extracellular fluid and causes nerve-cell death.21 When a conditioned stimulus is presented during fear acquisition, functional magnetic resonance imaging (fMRI) studies of fear-conditioning have consistently reported, in the prefrontal cortex:
- a decrease in the blood oxygen level-dependent signal, below resting baseline
- a reduction in blood flow (Figure 4).22
This discovery adds to evidence of a decrease in gray-matter density in the frontal lobes as a result of glutamatergic toxicity (Figure 5).
Excessive activation of L-glutamate receptors, believed to play a significant role in depression and other neuropsychiatric disorders, triggers calcium-dependent intracellular responses that, so to speak, "excite cells to death," causing nerve-cell apoptosis and a reduction in synaptic connections between areas of the brain responsible for learning and memory.23 Malfunction of these synaptic connections is thought to be partially responsible for depression and other psychiatric disorders.
Excessive activation of N-methyl-d-aspartate (NMDA) receptors is thought to be the underlying mechanism of neuronal cell death in glutamatergic toxicity. NMDA receptor proteins therefore have become a target in treating neurodegenerative psychiatric illnesses. There is more than one type of NMDA receptor; some are excitatory, others inhibitory. Three classes of compounds have emerged as therapeutic candidates for inhibiting NMDA receptor function in depression: those that inhibit glutamate binding, those that block the ion channel, and those that inhibit binding at the terminal regulatory domain.24
Regrettably, these compounds are not receptor-selective, but small structural modifications of these compounds have been found to produce significant changes in potency and selectivity. This should serve as a starting point for developing highly specific NMDA receptor modulators for a variety of neuropsychiatric and neurological conditions. GLYX-13, an NMDA receptor glycine-site partial agonist with ketamine-like antidepressant effects, has shown promise in treating depression and has been tested in 2 large phase-II trials.25
Neuronal circuitry of depression is altered by prolonged stress
Symptoms of depression can be explained by the anatomical circuit shown in Figure 6.15,20 Impaired concentration, diminished ability to process new information, and decline in memory function are associated with decreased nerve density in the hippocampus, which plays a key role in learning, memory, and encoding of emotionally relevant data into memory.26 The hippocampus interacts with the amygdala to provide input about the context in which stimuli occur.
Depressed people often demonstrate impulsivity and have difficulty controlling expression of emotions—traits attributed to increased neuronal density in the amygdala and insula, as illustrated by PET and voxel-based morphometry studies in depressed patients.27 These brain areas are implicated in subjective emotional experience, processing of emotional reactions, and impulsive decision-making. The amygdala normally is highly regulated by the prefrontal cortex, which uses rational judgment to interpret stimuli and regulate the expression of emotion.
A study involving a facial expression processing task demonstrated reduced connectivity between the amygdala and prefrontal cortex and increased functional connectivity among the amygdala, hippocampus, and caudate-putamen in depressed patients.24 And in a study that measured white matter conduction in various brain areas in depressed patients, the greatest reduction was found in areas connecting the limbic system to the prefrontal cortex and hippocampus—believed to be caused by stress response-induced ischemic glutamatergic neuroapoptosis.21 Such neuroapoptosis might lead to irrational interpretation of stimuli, unchecked expression of emotion, and impulsive thoughts and behavior that are often present in depression and other mood disorders.
Deep brain stimulation (DBS), in which electrodes are implanted in the brain, has proved effective at increasing synaptic connections between the prefrontal cortex and the limbic system when electrodes are placed appropriately.28 Patients with refractory depression who are treated with DBS show increased gray-matter density and functional activity in the prefrontal cortex, hippocampus, and fronto-limbic connections.29 DBS also increases neurotransmission of dopamine, serotonin, and norepinephrine within the fronto-limbic circuitry.30
Identifying risk factors for depression
Genetic risk factors. Forty percent of patients with depression have a first-degree relative with depression, suggesting a strong genetic component.10 Inherited differences in hippocampal volume, synaptic connections between the prefrontal cortex and amygdala, γ-aminobutyric acid (GABA)/glutamate balance, BDNF and its receptors, and anatomic positioning of the limbic system in relation to other brain structures might account for the heritability of psychiatric disorders such as depression.
Evidence has been consistent that hippocampal volume is diminished in the brain of depressed persons. However, there is no prospective cohort study to determine whether people who have lower gray-matter hippocampal density or volume, or both, before depression onset develop symptoms later in life. There also is no study to determine the percentage of people who have lower-than-average hippocampal gray-matter density or volume and who have a first-degree relative with depression. Such studies would yield valuable information about anatomic variables that increase the risk of depression.
It has been proposed that low GABA function is an inherited biomarker for depression. Bjork and co-workers found a lower plasma level of GABA in depressed subjects and in their first-degree relatives, suggesting that GABAergic tone might be under genetic control.11 Genetic loci studies in mice have linked depressive-like behavior to GABAergic loci on chromosomes 8 and 11, encoding alpha 1, alpha 6, and gamma subunits of GABAA receptors.23
A recent study in humans showed that severe, treatment-resistant depression with anxiety was linked to a mutation in the B1 subunit of the GABAA receptor. Positive genetic associations also have been found between polymorphisms in human GABAA receptor subunit genes and depression.11
GABA metabolizing enzymes also can be considered biological modifiers of depression. For example:
- GABA uptake and metabolism is controlled by the enzyme glutamic acid decarboxylase (GAD); depression has been found to be associated with a polymorphism in the GAD67 gene encoding an isoform of GAD.11
- GABA transaminase (GABA-T), which catabolizes GABA, is another key enzyme in GABA turnover.31
Taken together, these findings suggest that depression depends, to a considerable degree, on GABA production and metabolism.
A variant in the human BDNF gene, in which valine is substituted for methionine in position 66 of the pro-domain of the BDNF protein, is associated with
- a decrease in the production of BDNF
- increased susceptibility to neuropsychiatric disorders, including depression, anxiety disorder, and bipolar disorder (Figure 7).32
In neuroimaging studies, people with the Met/Met genotype have been found to have reduced hippocampal neuronal density and poor hippocampus-dependent memory function.23 They also displayed diminished ventromedial prefrontal cortex volume and a deficit in aversive memory extinction (ie, "holding grudges").
Another neurotrophic factor, vascular endothelial growth factor (VEGF), is a survival factor for endothelial cells and neurons and a modulator of synaptic transmission. Understanding the molecular and cellular specificity of antidepressant-induced VEGF will be critical to determine its potential as a therapeutic target in depression.33 Delineating the relationship between VEGF and depression has, ultimately, the potential to shed light on the still elusive neural mechanisms that underlie the pathophysiology of depression and the mechanisms by which antidepressants exert their effects.34
Genetic polymorphisms in monoamine receptors (5-HT2A), transporters (SERTPR, 5-HTTLPR, STin2, rs25531, SLC6A4), and regulatory enzymes should not be overlooked.35 There is reproducible evidence that variability in these polymorphisms is associated with variability in:
- vulnerability to depression
- the response to treatment with existing antidepressant medications.1
Most studies that look at changes in neuronal circuitry focus on the integrity of synaptic connections between the frontal cortex and limbic system; few of them have closely examined the importance of the anatomic proximity of the 2 regions. It might be that having an amygdala that is relatively closer to the frontal cortex and the hippocampus reduces a person’s risk of depression, and vice versa. This association needs to be investigated further with imaging studies.
Environmental risk factors. The brain is thought to remain plastic until approximately age 30,5 although plasticity diminishes after age 7—except in the hippocampus, which can regenerate throughout life.36 Early life experiences play an important role in forming synaptic connections between the frontal cortex and the limbic system, through a process known as fear conditioning.
Children learn early in life which stimuli are to be perceived as threatening or aversive and how best to respond to preserve their safety and internal sense of well-being. Those who grow up in a hostile environment learn to perceive more stimuli as threatening than children who grow up in a nurturing environment.32 It is possible that the amygdala is larger in children who grow up in less-than-ideal circumstances because this region is constantly being recruited—at the expense of the more rational frontal cortex.
Evidence suggests that these conditions reduce hippocampal neurogenesis37:
- increasing age
- substance abuse (opiates and methamphetamines)
- inadequate housing
- minimal physical activity
- little opportunity for social stimulation
- minimal learning experience.
Bottom Line
Depression has been understood as a neurotransmitter deficiency in the brain; treatments were engineered to increase release, or block degradation, of those neurotransmitters. Novel theories—all interconnected—of the neuroanatomical pathophysiology of depression focus more on differences in neuron density in the brain; effects of stress on neurogenesis and neuronal cell apoptosis; alterations in feedback pathways connecting the prefrontal cortex to the limbic system; and the role of proinflammatory mediators evoked during the stress response.
Related Resources
- Fuchs E. Neurogenesis in the adult brain: is there an association with mental disorders? Eur Arch Psychiatry Clin Neurosci. 2007;257(5):247-249.
- Videbech P, Ravnkilde B. Hippocampal volume and depression: a meta-analysis of MRI studies. Am J Psychiatry. 2004; 161(11):1957-1966.
Disclosure
The authors report no financial relationships with any company whose products are mentioned in this article or with manufacturers of competing products.
Acknowledgement
Anita Rao, second-year medical student, Stritch School of Medicine, Loyola University, Chicago, Illinois, assisted in the preparation of this manuscript.
1. Eley TC, Sugden K, Corsico A, et al. Gene-environment interaction analysis of serotonin system markers with adolescent depression. Mol Psychiatry. 2004;9(10):908-915.
2. Haber SN, Rauch SL. Neurocircuitry: a window into the networks underlying neuropsychiatric disease. Neuropsychopharmacology. 2010;35(1):1-3.
3. Frodl T, Bokde AL, Scheuerecker J, et al. Functional connectivity bias of the orbitofrontal cortex in drug-free patients with major depression. Biol Psychiatry. 2010; 67(2):161-167.
4. Woolley CS, Gould E, McEwen BS. Exposure to excess glucocorticoids alters dendritic morphology of adult hippocampal pyramidal neurons. Brain Res. 1990;531(1-2): 225-231.
5. Heim C, Nemeroff CB. The impact of early adverse experiences on brain systems involved in the pathophysiology of anxiety and affective disorders. Biol Psychiatry. 1999;46(11):1509-1522.
6. Isgor C, Kabbaj M, Akil H, et al. Delayed effects of chronic variable stress during peripubertal-juvenile period on hippocampal morphology and on cognitive and stress axis functions in rats. Hippocampus. 2004;14(5):636-648.
7. De Kloet ER, Vreugdenhil E, Oitzl MS, et al. Brain corticosteroid receptor balance in health and disease. Endocr Rev. 1998;19(3):269-301.
8. Philip AM, Kim SD, Vijayan MM. Cortisol modulates the expression of cytokines and suppressors of cytokine signaling (SOCS) in rainbow trout hepatocytes. Dev Comp Immunol. 2012;38(2):360-367.
9. Coplan JD, Lydiard RB. Brain circuits in panic disorder. Biol Psychiatry. 1998;44(12):1264-1276.
10. Anisman H, Merali Z. Cytokines, stress and depressive illness: brain-immune interactions. Ann Med. 2003;35(1):2-11.
11. Crowley JJ, Lucki I. Opportunities to discover genes regulating depression and antidepressant response from rodent behavioral genetics. Curr Pharm Des. 2005;11(2):157-169.
12. Covington HE 3rd, Vialou V, Nestler EJ. From synapse to nucleus: novel targets for treating depression. Neuropharmacology. 2010;58(4-5):683-693.
13. Videbech P, Ravnkilde B. Hippocampal volume and depression: a meta-analysis of MRI studies. Am J Psychiatry. 2004;161(11):1957-1966.
14. Sandi C. Stress, cognitive impairment and cell adhesion molecules. Nat Rev Neurosci. 2004;5(12):917-930.
15. Hartley CA, Phelps EA. Changing fear: the neurocircuitry of emotion regulation. Neuropsychopharmacology. 2010;35(1): 136-146.
16. Kim DK, Lim SW, Lee S, et al. Serotonin transporter gene polymorphism and antidepressant response. Neuroreport. 2000;11(1):215-219.
17. Ueyama E, Ukai S, Ogawa A, et al. Chronic repetitive transcranial magnetic stimulation increases hippocampal neurogenesis in rats. Psychiatry Clin Neurosci. 2011;65(1):77-81.
18. Irwin W, Anderle MJ, Abercrombie HC, et al. Amygdalar interhemispheric functional connectivity differs between the non-depressed and depressed human brain. Neuroimage. 2004;21(2):674-686.
19. McEwen BS. Physiology and neurobiology of stress and adaptation: central role of the brain. Physiol Rev. 2007; 87(3):873-904.
20. Gusnard DA, Raichle ME, Raichle ME. Searching for a baseline: functional imaging and the resting human brain. Nat Rev Neurosci. 2001;2(10):685-694.
21. Hulsebosch CE, Hains BC, Crown ED, et al. Mechanisms of chronic central neuropathic pain after spinal cord injury. Brain Res Rev. 2009;60(1):202-213.
22. Gottfried JA, Dolan RJ. Human orbitofrontal cortex mediates extinction learning while accessing conditioned representations of value. Nat Neurosci. 2004;7(10):1144-1152.
23. Arnone D, McKie S, Elliott R, et al. State-dependent changes in hippocampal grey matter in depression. Mol Psychiatry. 2012;1(8):1359-4184.
24. Brunoni AR, Lopes M, Fregni F. A systematic review and meta-analysis of clinical studies on major depression and BDNF levels: implications for the role of neuroplasticity in depression. Int J Neuropsychopharmacol. 2008;11(8):1169-1180.
25. Maeng S, Zarate CA Jr. The role of glutamate in mood disorders: results from the ketamine in major depression study and the presumed cellular mechanism underlying its antidepressant effects. Curr Psychiatry Rep. 2007;9(6):467-474.
26. Vaidya VA, Fernandes K, Jha S. Regulation of adult hippocampal neurogenesis: relevance to depression. Expert Rev Neurother. 2007;7(7):853-864.
27. Lisiecka DM, Carballedo A, Fagan AJ, et al. Altered inhibition of negative emotions in subjects at family risk of major depressive disorder. J Psychiatr Res. 2012;46(2):181-188.
28. Mayberg HS, Lozano AM, Voon V, et al. Deep brain stimulation for treatment-resistant depression. Neuron. 2005;45(5):651-660.
29. Levkovitz Y, Harel EV, Roth Y, et al. Deep transcranial magnetic stimulation over the prefrontal cortex: evaluation of antidepressant and cognitive effects in depressive patients. Brain Stimul. 2009;2(4):188-200.
30. Schlaepfer TE, Lieb K. Deep brain stimulation for treatment of refractory depression. Lancet. 2005;366(9495):1420-1422.
31. Astrup J. Energy-requiring cell functions in the ischemic brain. Their critical supply and possible inhibition in protective therapy. J Neurosurg. 1982;56(4):482-497.
32. Fletcher JM. Childhood mistreatment and adolescent and young adult depression. Soc Sci Med. 2009;68(5):799-806.
33. Warner-Schmidt JL, Duman R. VEGF as a potential target for therapeutic intervention in depression. Curr Opin Pharmacol. 2008;8(1):14-19.
34. Clark-Raymond A, Halaris A. VEGF and depression: a comprehensive assessment of clinical data. J Psychiatr Res. 2013;47(8):1080-1087.
35. Alonso R, Griebel G, Pavone G, et al. Blockade of CRF(1) or V(1b) receptors reverses stress-induced suppression of neurogenesis in a mouse model of depression. Mol Psychiatry. 2004;9(3):278-286.
36. Thomas RM, Peterson DA. A neurogenic theory of depression gains momentum. Mol Interv. 2003;3(8):441-444.
37. Jacobs BL. Adult brain neurogenesis and depression. Brain Behav Immun. 2002;16(5):602-609.
For more than 50 years, depression has been studied, and understood, as a deficiency of specific neurotransmitters in the brain—namely dopamine, norepinephrine, and serotonin. Treatments for depression have been engineered to increase the release, or block the degradation, of these neurotransmitters within the synaptic cleft. Although a large body of evidence supports involvement of dopamine, norepinephrine, and serotonin in the pathophysiology of depression, the observation that pharmacotherapy is able to induce remission only in <50% of patients1 has prompted researchers to look beyond neurotransmitters for an understanding of depressive disorders (Table 1).
Today, theories of depression focus more on differences in neuron density in various regions of the brain; the effect of stress on neurogenesis and neuronal cell apoptosis; alterations in feedback pathways connecting the pre-frontal cortex to the limbic system; and the role of proinflammatory mediators evoked during the stress response (Box,2,3). These theories should not be viewed as separate entities because they are highly interconnected. Integrating them provides for a more expansive understanding of the pathophysiology of depression and biomarkers that are involved (Table 2).
In this article, we:
- integrate the large body of evidence supporting the contribution of the above variables to the onset and persistence of depression
- propose a possible risk stratification model
- explore possibilities for treatment.
The stress response: How does it affect the brain?
Stress initiates a cascade of events in the brain and peripheral systems that enable an organism to cope with, and adapt to, new and challenging situations. That is why physiologic and behavioral responses to stress generally are considered beneficial to survival.
When stress is maintained for a long period, both brain and body are harmed because target cells undergo prolonged exposure to physiologic stress mediators. For example, Woolley and Gould4 exposed rats to varying durations of glucocorticoids and observed that treating animals with corticosterone injection for 21 days induced neuronal atrophy in the hippocampus and prefrontal cortex and increased release of proinflammatory cytokines from astrocytes within the limbic system. Stressful experiences are believed to be closely associated with development of psychological alterations and, thus, neuropsychiatric disorders.5 To go further: Chronic stress is believed to be the leading cause of depression.
When the brain perceives an external threat, the stress response is called into action. The amygdala, part of the primitive limbic system, is the primary area of the brain responsible for triggering the stress response,6 signaling the hypothalamus to release corticotropin-releasing hormone (CRH) to the anterior pituitary gland, which, in turn releases adrenocorticotropic hormone to the adrenal glands (Figure 1).7 The adrenal glands are responsible for releasing glucocorticoids, which, because of their lipophilic nature, can cross the blood-brain barrier and are found in higher levels in the cerebrospinal fluid (CSF) of depressed persons.7
Once in the brain, glucocorticoids can be irreversibly degraded in the cytosol by the enzyme 11-β hydroxysteroid dehydrogenase type 2, a potential target for treating depression, or can bind to the glucocorticoid receptor (GR). Results of a research study of the role of cortisol in suppression of proinflammatory cytokine signaling activity in rainbow trout hepatocytes suggest a negative feedback loop for GR gene regulation during stress.8
Because this auto-regulation is a crucial step in the physiological stress response, the idea of the GR as an important biomarker in depression has gained popularity. In humans, when the GR binds to glucocorticoids that are released from the adrenal cortex during the stress response, the activated GR-cortisol complex represses expression of proinflammatory proteins in astrocytes and microglial cells and in all cells in the periphery before they are transcribed into proteins.9 The GR also has been shown to modulate neurogenesis.8 Repeated stress that persists over a long period leads to GR resistance, thereby reducing inhibition of production of proinflammatory cytokines.
Exposure to stress for >21 days leads to overactivity of the HPA axis and GR resistance,10 which decreases suppression of proinflammatory cytokines. There is evidence that proinflammatory cytokines, tumor necrosis factor-α, and interleukin-6 further induce GR receptor resistance by preventing the cortisol-GR receptor complex from entering cell nuclei and decreasing binding to DNA within the nuclei.11 Dexamethasone, a GR agonist, has been implicated in research studies for potential re-regulation of the HPA axis in depressed persons.12
Nerve cell death in the hippocampus
Studies showing reduced hippocampal volume in unipolar depression and a correlation between the number of episodes and a consequence of untreated depression and studies suggesting that treatment can stop or reduce shrinkage,13 and recent findings of rapid neurogenesis in hippocampi in response to ketamine, brings our focus to hippocampus in depression.
The greatest density of GRs is found in the hippocampus, which is closely associated with the limbic system.7 Therefore, the hippocampus is sensitive to increases in glucocorticoids in the brain and plays a crucial role in regulation of the HPA axis.
Evidence shows that, with chronic stress exposure (≥21 days), nerve cells in the hippocampus begin to atrophy and can no longer provide negative feedback inhibition to the hypothalamus, causing HPA axis dysregulation and uncontrolled release of glucocorticoids into the bloodstream and CSF.2 In patients with Cushing syndrome, who produce abnormally high levels of glucocorticoids, the incidence of depression is as high as 50%.14 Similarly, patients treated with glucocorticoids such as prednisone often experience psychiatric symptoms, the most common being depression. Gould found that partial adrenalectomy increased hippocampal neurogenesis in rat brains, indicating a beneficial effect of stress hormone antagonism.4 CRH antagonists are being investigated as a promising, less invasive treatment option for depression.
Focus has shifted to the role of the hippocampus in depression because of its ability to regenerate throughout adulthood, which could lead to re-regulation of the HPA axis and subsiding of the stress response, widely believed to be the primary precipitating factor in depression onset. Rats require 10 to 21 days of rest to recover from the effects of chronic (21-day) administration of glucocorticoids.15 If this relationship proves to be directly proportional, rats would need an estimated 120 days to recover from 6 months of constant glucocorticoid exposure. If the same holds for humans, current depression treatment programs, which average 6 weeks, are not long enough for adequate recovery.
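The proportional extrapolation above can be checked with simple arithmetic. The 6-month exposure and the linear-scaling assumption come from the text; the function itself is just an illustrative calculation anchored to the 21-day rat data.

```python
def recovery_window(exposure_days, rest_min=10.0, rest_max=21.0,
                    ref_exposure=21.0):
    """Estimated (low, high) rest days if recovery scales linearly with
    glucocorticoid exposure, anchored to the 21-day rat experiments."""
    scale = exposure_days / ref_exposure
    return rest_min * scale, rest_max * scale

low, high = recovery_window(exposure_days=180)  # ~6 months of exposure
print(f"{low:.0f} to {high:.0f} days")  # 86 to 180 days
# The ~120-day estimate cited in the text falls inside this band, and a
# 6-week (42-day) treatment course falls well below its lower bound.
```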
Antidepressants such as selective serotonin reuptake inhibitors, serotonin-norepinephrine reuptake inhibitors, and tricyclics stimulate neurogenesis in the hippocampus via increases in brain-derived neurotrophic factor (BDNF), suggesting that these neurotransmitters play an important role in depression.16
Repetitive transcranial magnetic stimulation (rTMS), a noninvasive neuromodulation therapy approved to treat major depression, delivers brief magnetic pulses that induce electrical currents in cortical neurons, allowing focal stimulation. rTMS targets prefrontal circuits of the brain that are underactive during depressive episodes. Recent animal studies have suggested that bromodeoxyuridine (BrdU)-positive cells (newborn cells) increase significantly in the dentate gyrus with chronic rTMS, in turn suggesting that hippocampal neurogenesis might be involved in its antidepressant effects.17 Although the underlying therapeutic mechanisms of rTMS treatment of depression remain unclear, it appears that hippocampal neurogenesis might be required to produce the effects of antidepressant treatments, including drugs and electroconvulsive therapy.17
Selective ‘shunting’ of energy occurs during the stress response
Hormones released from the adrenal glands during stress divert glucose to exercising muscles and the brain’s limbic system, which are involved in the fight-or-flight response.18 However, metabolic functions and areas of the brain that are not involved in the stress response, such as the cerebral cortex and hippocampus, are deprived of energy as a consequence of this innate selective shunting (Figure 2).19
Positron-emission tomography (PET) scanning of the resting brain shows that components of the cerebral cortex (prefrontal cortex, hippocampus, striatum) and areas connecting the cerebral cortex to the limbic system exhibit the most energy consumption in the brain during rest (Figure 3).20 PET studies also show that neuronal connections within these energy-demanding areas atrophy more rapidly than in any other area of the brain when their energy supply is reduced or cut off.6
When the supply of oxygen and glucose to certain areas of the brain is reduced—such as in traumatic brain injury or stroke—the excitatory neurotransmitter glutamate accumulates in extracellular fluid and causes nerve-cell death.21 Functional magnetic resonance imaging (fMRI) studies of fear conditioning have consistently reported that, when a conditioned stimulus is presented during fear acquisition, the prefrontal cortex shows:
- a decrease in the blood oxygen level-dependent signal, below resting baseline
- a reduction in blood flow (Figure 4).22
This discovery adds to evidence demonstrating a decrease in gray-matter density in the frontal lobes as a result of glutamatergic toxicity (Figure 5).
L-glutamate, believed to play a significant role in depression and other neuropsychiatric disorders, triggers calcium-dependent intracellular responses that, so to speak, “excite cells to death,” causing nerve-cell apoptosis and a reduction in synaptic connections between areas of the brain responsible for learning and memory.23 Malfunction of these synaptic connections is thought to be partially responsible for depression and other psychiatric disorders.
Excessive activation of N-methyl-D-aspartate (NMDA) receptors is thought to be the underlying mechanism of neuronal cell death in glutamatergic toxicity. NMDA receptor proteins have therefore become a target in treating neurodegenerative psychiatric illnesses. There is more than one type of NMDA receptor; some are excitatory, others inhibitory. Three classes of compounds have emerged as therapeutic candidates for inhibiting NMDA receptor function in the treatment of depression: those that inhibit glutamate binding, those that block the ion channel, and those that inhibit receptor binding at the terminal regulatory domain.24
Regrettably, these compounds are not receptor-selective, but small structural modifications have been found to produce significant changes in potency and selectivity. This should serve as a starting point for developing highly specific NMDA receptor modulators for a variety of neuropsychiatric and neurological conditions. GLYX-13 (rapastinel), an NMDA receptor glycine-site partial agonist with ketamine-like antidepressant properties, has been investigated for treating depression and tested in 2 large phase II studies.25
Neuronal circuitry of depression is altered by prolonged stress
Symptoms of depression can be explained by the anatomical circuit shown in Figure 6.15,20 Impaired concentration, diminished ability to process new information, and decline in memory function are associated with decreased nerve density in the hippocampus, which plays a key role in learning, memory, and encoding of emotionally relevant data into memory.26 The hippocampus interacts with the amygdala to provide input about the context in which stimuli occur.
Depressed people often demonstrate impulsivity and have difficulty controlling expression of emotions, traits attributed to increased neuronal density in the amygdala and insula, as illustrated by PET and voxel-based morphometry studies of depressed patients.27 These brain areas are implicated in subjective emotional experience, processing of emotional reactions, and impulsive decision-making. The amygdala is normally highly regulated by the prefrontal cortex, which uses rational judgment to interpret stimuli and regulate the expression of emotion.
A study involving a facial expression processing task demonstrated reduced connectivity between the amygdala and prefrontal cortex and increased functional connectivity among the amygdala, hippocampus, and caudate-putamen in depressed patients.24 And in a study that measured white matter conduction in various brain areas in depressed patients, the greatest reduction was found in areas connecting the limbic system to the prefrontal cortex and hippocampus, believed to be caused by stress response-induced ischemic glutamatergic neuroapoptosis.21 Such neuroapoptosis might lead to irrational interpretation of stimuli, unchecked expression of emotion, and impulsive thoughts and behavior that are often present in depression and other mood disorders.
Deep brain stimulation (DBS), in which electrodes are implanted in the brain, has proved effective at increasing synaptic connections between the prefrontal cortex and the limbic system when electrodes are placed appropriately.28 Patients with refractory depression who are treated with DBS show increased gray-matter density and functional activity in the prefrontal cortex, hippocampus, and fronto-limbic connections.29 DBS also increases neurotransmission of dopamine, serotonin, and norepinephrine within the fronto-limbic circuitry.30
Identifying risk factors for depression
Genetic risk factors. Forty percent of patients with depression have a first-degree relative with depression, suggesting a strong genetic component.10 Inherited differences in hippocampal volume, synaptic connections between the prefrontal cortex and amygdala, γ-aminobutyric acid (GABA)/glutamate balance, BDNF and its receptors, and anatomic positioning of the limbic system in relation to other brain structures might account for the heritability of psychiatric disorders such as depression.
Evidence has been consistent that hippocampal volume is diminished in the brains of depressed persons. However, no prospective cohort study has determined whether people who have lower hippocampal gray-matter density or volume, or both, before depression onset go on to develop symptoms later in life. Nor has any study determined the percentage of people who have lower-than-average hippocampal gray-matter density or volume and who have a first-degree relative with depression. Such studies would yield valuable information about anatomic variables that increase the risk of depression.
It has been proposed that low GABA function is an inherited biomarker for depression. Bjork and co-workers found a lower plasma level of GABA in depressed subjects and in their first-degree relatives, suggesting that GABAergic tone might be under genetic control.11 Genetic loci studies in mice have linked depressive-like behavior to GABAergic loci on chromosomes 8 and 11, encoding alpha 1, alpha 6, and gamma subunits of GABAA receptors.23
A recent study in humans linked severe, treatment-resistant depression with anxiety to a mutation in the B1 subunit of the GABAA receptor. Positive genetic associations have also been found between polymorphisms in human GABAA receptor subunit genes and depression.11
GABA metabolizing enzymes also can be considered biological modifiers of depression. For example:
- GABA uptake and metabolism is controlled by the enzyme glutamic acid decarboxylase (GAD); depression has been found to be associated with a polymorphism in the GAD67 gene encoding an isoform of GAD.11
- GABA transaminase (GABA-T), which catabolizes GABA, is another key enzyme in GABA turnover.31
These findings suggest that depression depends, to a considerable degree, on GABA production and metabolism.
A variant in the human BDNF gene, in which methionine is substituted for valine at position 66 of the pro-domain of the BDNF protein (Val66Met), is associated with
- a decrease in the production of BDNF
- increased susceptibility to neuropsychiatric disorders, including depression, anxiety disorder, and bipolar disorder (Figure 7).32
People with the Met/Met genotype have been found, in neuroimaging studies, to have lower hippocampal neuronal density and poor hippocampus-dependent memory function.23 They also display diminished ventromedial prefrontal cortex volume and a deficit in aversive memory extinction (ie, “holding grudges”).
Another neurotrophic factor, vascular endothelial growth factor (VEGF), is a survival factor for endothelial cells and neurons and a modulator of synaptic transmission. Understanding the molecular and cellular specificity of antidepressant-induced VEGF will be critical to determine its potential as a therapeutic target in depression.33 Delineating the relationship between VEGF and depression has, ultimately, the potential to shed light on the still elusive neural mechanisms that underlie the pathophysiology of depression and the mechanisms by which antidepressants exert their effects.34
Genetic polymorphisms in monoamine receptors (5-HT2A), transporters (SERTPR, 5-HTTLPR, STin2, rs25531, SLC6A4), and regulatory enzymes should not be overlooked.35 There is reproducible evidence that variability in these polymorphisms is associated with variability in:
- vulnerability to depression
- the response to treatment with existing antidepressant medications.1
Most studies that look at changes in neuronal circuitry focus on the integrity of synaptic connections between the frontal cortex and limbic system; few of them have closely examined the importance of the anatomic proximity of the 2 regions. It might be that having an amygdala that is relatively closer to the frontal cortex and the hippocampus reduces a person’s risk of depression, and vice versa. This association needs to be investigated further with imaging studies.
Environmental risk factors. The brain is thought to remain plastic until age 30,5 although plasticity diminishes after age 7—except for the hippocampus, which can regenerate throughout life.36 Early life experiences play an important role in forming synaptic connections between the frontal cortex and the limbic system, through a process known as fear conditioning.
Children learn early in life which stimuli are to be perceived as threatening or aversive and how best to respond to preserve their safety and internal sense of well-being. Those who grow up in a hostile environment learn to perceive more stimuli as threatening than children who grow up in a nurturing environment.32 It is possible that the amygdala is larger in children who grow up in less-than-ideal circumstances because this region is constantly being recruited—at the expense of the more rational frontal cortex.
Evidence suggests that these conditions reduce hippocampal neurogenesis37:
- increasing age
- substance abuse (opiates and methamphetamines)
- inadequate housing
- minimal physical activity
- little opportunity for social stimulation
- minimal learning experience.
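As a purely hypothetical illustration of how the genetic and environmental factors above might be combined into the risk stratification model this article proposes, the sketch below assigns additive weights to the presence of each factor. Every factor name, weight, and threshold is an invented placeholder, not a validated clinical value.

```python
# Hypothetical additive risk-score sketch. All weights and cutoffs are
# illustrative assumptions for exposition, not validated clinical values.

RISK_WEIGHTS = {
    "first_degree_relative": 2,       # genetic loading
    "low_plasma_gaba": 2,             # Bjork et al.-style biomarker
    "bdnf_val66met_carrier": 1,
    "reduced_hippocampal_volume": 2,
    "chronic_stress_exposure": 2,     # e.g., sustained >21 days
    "low_physical_activity": 1,
    "minimal_social_stimulation": 1,
}

def risk_score(factors):
    """Sum the weights of each factor present; higher = higher putative risk."""
    return sum(RISK_WEIGHTS[f] for f, present in factors.items() if present)

def stratify(score):
    """Map a score onto arbitrary low/moderate/high bands."""
    if score >= 6:
        return "high"
    if score >= 3:
        return "moderate"
    return "low"

patient = {"first_degree_relative": True, "low_plasma_gaba": True,
           "chronic_stress_exposure": True}
score = risk_score(patient)    # 2 + 2 + 2 = 6
print(stratify(score))
```

A validated instrument would of course require prospectively measured weights; the point of the sketch is only that the biomarkers reviewed here are, in principle, combinable into a single stratification score.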
Bottom Line
Depression has been understood as a neurotransmitter deficiency in the brain; treatments were engineered to increase release, or block degradation, of those neurotransmitters. Novel theories—all interconnected—of the neuroanatomical pathophysiology of depression focus more on differences in neuron density in the brain; effects of stress on neurogenesis and neuronal cell apoptosis; alterations in feedback pathways connecting the pre-frontal cortex to the limbic system; and the role of pro-inflammatory mediators evoked during the stress response.
Related Resources
- Fuchs E. Neurogenesis in the adult brain: is there an association with mental disorders? Eur Arch Psychiatry Clin Neurosci. 2007;257(5):247-249.
- Videbech P, Ravnkilde B. Hippocampal volume and depression: a meta-analysis of MRI studies. Am J Psychiatry. 2004; 161(11):1957-1966.
Disclosure
The authors report no financial relationships with any company whose products are mentioned in this article or with manufacturers of competing products.
Acknowledgement
Anita Rao, second-year medical student, Stritch School of Medicine, Loyola University, Chicago, Illinois, assisted in the preparation of this manuscript.
For more than 50 years, depression has been studied, and understood, as a deficiency of specific neurotransmitters in the brain—namely dopamine, norepinephrine, and serotonin. Treatments for depression have been engineered to increase the release, or block the degradation, of these neurotransmitters within the synaptic cleft. Although a large body of evidence supports involvement of dopamine, norepinephrine, and serotonin in the pathophysiology of depression, the observation that pharmacotherapy is able to induce remission only in <50% of patients1 has prompted researchers to look beyond neurotransmitters for an understanding of depressive disorders (Table 1).
Today, theories of depression focus more on differences in neuron density in various regions of the brain; the effect of stress on neurogenesis and neuronal cell apoptosis; alterations in feedback pathways connecting the pre-frontal cortex to the limbic system; and the role of proinflammatory mediators evoked during the stress response (Box,2,3). These theories should not be viewed as separate entities because they are highly interconnected. Integrating them provides for a more expansive understanding of the pathophysiology of depression and biomarkers that are involved (Table 2).
In this article, we:
- integrate the large body of evidence supporting the contribution of the above variables to the onset and persistence of depression
- propose a possible risk stratification model
- explore possibilities for treatment.
The stress response: How does it affect the brain?
Stress initiates a cascade of events in the brain and peripheral systems that enable an organism to cope with, and adapt to, new and challenging situations. That is why physiologic and behavioral responses to stress generally are considered beneficial to survival.
When stress is maintained for a long period, both brain and body are harmed because target cells undergo prolonged exposure to physiologic stress mediators. For example, Woolley and Gould4 exposed rats to varying durations of glucocorticoids and observed that treating animals with corticosterone injection for 21 days induced neuronal atrophy in the hippocampus and prefrontal cortex and increased release of proinflammatory cytokines from astrocytes within the limbic system. Stressful experiences are believed to be closely associated with development of psychological alterations and, thus, neuropsychiatric disorders.5 To go further: Chronic stress is believed to be the leading cause of depression.
When the brain perceives an external threat, the stress response is called into action. The amygdala, part of the primitive limbic system, is the primary area of the brain responsible for triggering the stress response,6 signaling the hypothalamus to release corticotropin-releasing hormone (CRH) to the anterior pituitary gland, which, in turn releases adrenocorticotropic hormone to the adrenal glands (Figure 1).7 The adrenal glands are responsible for releasing glucocorticoids, which, because of their lipophilic nature, can cross the blood-brain barrier and are found in higher levels in the cerebrospinal fluid (CSF) of depressed persons.7
Once in the brain, glucocorticoids can be irreversibly degraded in the cytosol by the enzyme 11-β hydroxysteroid dehydrogenase type 2, a potential target for treating depression, or can bind to the glucocorticoid receptor (GR). Results of a research study of the role of cortisol in suppression of proinflammatory cytokine signaling activity in rainbow trout hepatocytes suggest a negative feedback loop for GR gene regulation during stress.8
Because this auto-regulation is a crucial step in the physiological stress response, the idea of the GR as an important biomarker in depression has gained popularity. In humans, when the GR binds to glucocorticoids that are released from the adrenal cortex during the stress response, the activated GR-cortisol complex represses expression of proinflammatory proteins in astrocytes and microglial cells and in all cells in the periphery before they are transcribed into proteins.9 The GR also has been shown to modulate neurogenesis.8 Repeated stress that persists over a long period leads to GR resistance, thereby reducing inhibition of production of proinflammatory cytokines.
Exposure to stress for >21 days leads to overactivity of the HPA axis and GR resistance,10 which decreases suppression of proinflammatory cytokines. There is evidence that proinflammatory cytokines, tumor necrosis factor-α, and interleukin-6 further induce GR receptor resistance by preventing the cortisol-GR receptor complex from entering cell nuclei and decreasing binding to DNA within the nuclei.11 Dexamethasone, a GR agonist, has been implicated in research studies for potential re-regulation of the HPA axis in depressed persons.12
Nerve cell death in the hippocampus
Studies showing reduced hippocampal volume in unipolar depression and a correlation between the number of episodes and a consequence of untreated depression and studies suggesting that treatment can stop or reduce shrinkage,13 and recent findings of rapid neurogenesis in hippocampi in response to ketamine, brings our focus to hippocampus in depression.
The greatest density of GRs is found in the hippocampus, which is closely associated with the limbic system.7 Therefore, the hippocampus is sensitive to increases in glucocorticoids in the brain and plays a crucial role in regulation of the HPA axis.
Evidence shows that in chronic stress exposure (≥21 days), nerve cells in the hippocampus begin to atrophy and can no longer provide negative feedback inhibition to the hypothalamus, causing HPA axis dysregulation and uncontrolled release of glucocorticoids into the bloodstream and CSF.2 In patients with Cushing syndrome, who produce abnormally high levels of glucocorticoid, the incidence of depression is as high as 50%.14 Similarly, patients treated with glucocorticoids such as prednisone often experience psychiatric symptoms, the most common being depression. Gould found that partial adrenalectomy increased hippocampal neurogenesis in rat brains, indicating the beneficial effect of stress hormone antagonism.4 CRH antagonists are being looked at as a promising and less invasive treatment option for depression.
Focus has been diverted to the role of the hippocampus in depression because of its ability to regenerate throughout adulthood, leading potentially to a re-regulation of the HPA axis and subsiding of the stress response, which is universally believed to be the primary precipitating factor in depression onset. Rats require 10 to 21 days of rest to recover from the effects of chronic (21 days) administration of glucocorticoids.15 If this proves to be a directly proportional relationship, then rats would need an estimated 120 days to recover from 6 months of constant glucocorticoid exposure. Considering that the same is true for humans, current depression treatment programs, which average 6 weeks, are not long enough for adequate recovery.
Antidepressants such as selective serotonin reuptake inhibitors, serotonin-norepinephrine reuptake inhibitors, and tricyclics stimulate neurogenesis in the hippocampus via increases in brain-derived neurotrophic factor (BDNF), suggesting that these neurotransmitters play an important role depression.16
Repetitive transcranial magnetic stimulation (rTMS), a noninvasive neuromodulation therapy approved to treat major depression, delivers brief magnetic pulses to the limbic structures. Treatment facilitates focal stimulation, rapidly applying electrical charges to the cortical neurons. TMS targets prefrontal circuits of the brain that are underactive during depressive episodes. Recent animal studies have suggested that bromodeoxyuridine (BrdU)-positive cells (newborn cells) are increased significantly in the dentate gyrus, in turn suggesting that hippocampal neurogenesis might be involved in the antidepressant effects of chronic rTMS.17 Although the underlying therapeutic mechanisms of rTMS treatment of depression remain unclear, it appears that hippocampal neurogenesis might be required to produce the effects of antidepressant treatments, including drugs and electroconvulsive therapy.17
Selective ‘shunting’ of energy occurs during the stress response
Hormones released from the adrenal glands during stress divert glucose to exercising muscles and the brain’s limbic system, which are involved in the fight-or-flight response.18 However, metabolic functions and areas of the brain that are not involved in the stress response, such as the cerebral cortex and hippocampus, are deprived of energy as a consequence of this innate selective shunting (Figure 2).19
Positron-emission tomography (PET) scanning of the resting brain shows that components of the cerebral cortex (prefrontal cortex, hippocampus, striatum) and areas connecting the cerebral cortex to the limbic system exhibit the most energy consumption in the brain during rest (Figure 3).20 PET studies also show that neuronal connections within these energy-demanding areas atrophy more rapidly than in any other area of the brain when their energy supply is reduced or cut off.6
When the supply of oxygen and glucose to certain areas of the brain is reduced—such as in traumatic brain injury or stroke—the excitatory neurotransmitter glutamate accumulates in extracellular fluid and causes nerve-cell death.21 When a conditioned stimulus is presented during fear acquisition, functional magnetic resonance imaging (fMRI) studies of fear-conditioning have consistently reported, in the prefrontal cortex:
- a decrease in the blood oxygen level-dependent signal, below resting baseline
- a reduction in blood flow (Figure 4).22
This discovery adds to evidence that demonstrates a decrease in gray-matter density in the frontal lobes as a result of glutaminergic toxicity (Figure 5).
Activation of L-glutamate, believed to play a significant role in depression and other neuropsychiatric disorders, triggers calcium-dependent intracellular responses that “excite cells to death,” so to speak—thereby causing nerve-cell apoptosis and a reduction in synaptic connections between different areas of the brain responsible for learning and memory.23 Malfunction of these synaptic connections is thought to be partially responsible for depression and other psychiatric disorders.
Excessive activation of N-methyl-d-asparate (NMDA) receptors is thought to be the underlying mechanism that leads to neuronal cell death in glutaminergic toxicity. Therefore, NMDA receptor proteins have become a target in treating neurodegenerative psychiatric illnesses. There is more than one type of NMDA receptor; some of them are excitatory, others are inhibitory. Four compounds have presented as therapeutic candidates for inhibition of NMDA receptor functioning and treatment of depression: those that inhibit glutamate binding, those that block the ion channel, and those that inhibit receptor binding to the terminal regulatory domain.24
Regrettably, these chemical compounds are not receptor-selective, but small structural modifications of these NMDA receptors have been found and lead to significant changes in potency and selectivity. This should serve as a unique starting point for developing highly specific NMDA receptor modulator agents for a variety of neuropsychiatric and neurological conditions. GLYX-13, a derivative of ketamine (an NMDA receptor antagonist), has been implicated for use in treating depression. It has been tested on 2 large phase-II study groups.25
Neuronal circuitry of depression is altered by prolonged stress
Symptoms of depression can be explained by the anatomical circuit shown in Figure 6.15,20 Impaired concentration, diminished ability to process new information, and decline in memory function are associated with decreased nerve density in the hippocampus, which plays a key role in learning, memory, and encoding of emotionally relevant data into memory.26 The hippocampus interacts with the amygdala to provide input about the context in which stimuli occur.
Depressed people often demonstrate impulsivity and have difficulty controlling expression of emotions—traits that are attributed to increased neuronal density in the amygdala and insula, which has been illustrated in PET scans and voxel-based morphometry in depressed patients.27 These brain areas are implicated in subjective emotional experience, processing of emotional reactions, and impulsive decision-making. The amygdala is normally highly regulated by the prefrontal cortex, which uses rational judgment to interpret stimuli and regulate the expression of emotion.
A study involving a facial expression processing task demonstrated reduced connectivity between the amygdala and prefrontal cortex and increased functional connectivity among the amygdala, hippocampus, and caudate-putamen in depressed patients.24 And in a study that measured white matter conduction in various brain areas in depressed patients, the greatest reduction was found in areas connecting the limbic system to the prefrontal cortex and hippocampus—believed to be caused by stress response-induced ischemic glutaminergic neuroapoptosis.21 Such neuroapoptosis might lead to irrational interpretation of stimuli, unchecked expression of emotion, and impulsive thoughts and behavior that are often present in depression and other mood disorders.
Deep brain stimulation (DBS), in which electrodes are implanted in the brain, has proved effective at increasing synaptic connections between the prefrontal cortex and the limbic system when electrodes are placed appropriately.28 Patients with refractory depression who are treated with DBS show increased gray-matter density and functional activity in the prefrontal cortex, hippocampus, and fronto-limbic connections.29 DBS also increases neurotransmission of dopamine, serotonin, and norepinephrine within the fronto-limbic circuitry.30
Identifying risk factors for depression
Genetic risk factors. Forty percent of patients with depression have a first-degree relative with depression, suggesting a strong genetic component.10 Inherited differences in hippocampal volume, synaptic connections between the prefrontal cortex and amygdala, γ-aminobutyric acid (GABA)/glutamate balance, BDNF neurotransmitter receptors, and anatomic positioning of the limbic system in relation to other brain structures might account for the heritability of psychiatric disorders such as depression.
Evidence has been consistent that hippocampal volume is diminished in the brain of depressed persons. However, there is no prospective cohort study to determine whether people who have lower gray-matter hippocampal density or volume, or both, before depression onset develop symptoms later in life. There also is no study to determine the percentage of people who have lower-than-average hippocampal gray-matter density or volume and who have a first-degree relative with depression. Such studies would yield valuable information about anatomic variables that increase the risk of depression.
It has been proposed that low GABA function is an inherited biomarker for depression. Bjork and co-workers found a lower plasma level of GABA in depressed subjects and in their first-degree relatives, confirming that GABAergic tone might be under genetic control.11 Genetic loci studies in mice have linked depressive-like behavior to GABAergic loci on chromosomes 8 and 11, encoding alpha 1, alpha 6, and gamma subunits of GABAA receptors.23
A recent study in humans showed that severe, treatment-resistant depression with anxiety was linked to a mutation in the B1 subunit of the GABAA receptor. Positive genetic associations were found between polymorphism in human GABAA receptor subunit genes.11
GABA metabolizing enzymes also can be considered biological modifiers of depression. For example:
- GABA uptake and metabolism is controlled by the enzyme glutamic acid decarboxylase (GAD); depression has been found to be associated with a polymorphism in the GAD67 gene encoding an isoform of GAD.11
- GABA transaminase (GABA-T) is another key enzyme in GABA turnover.31 It catabolizes GABA.
We can conclude that, to a high degree, depression depends on GABA production and metabolism.
A variant in the human BDNF gene, in which valine is substituted for methionine in position 66 of the pro-domain of the BDNF protein, is associated with
- a decrease in the production of BDNF
- increased susceptibility to neuropsychiatric disorders, including depression, anxiety disorder, and bipolar disorder (Figure 7).32
People with the MM allele have been found to have a small hippocampal neuronal density and poor hippocampus-dependent memory function in neuroimaging studies.23 They also displayed diminished ventromedial prefrontal cortex volume and presented with aversive memory extinction deficit (ie, “holding grudges”).
Another neurotrophic factor, vascular endothelial growth factor (VEGF), is a survival factor for endothelial cells and neurons and a modulator of synaptic transmission. Understanding the molecular and cellular specificity of antidepressant-induced VEGF will be critical to determine its potential as a therapeutic target in depression.33 Delineating the relationship between VEGF and depression has, ultimately, the potential to shed light on the still elusive neural mechanisms that underlie the pathophysiology of depression and the mechanisms by which antidepressants exert their effects.34
Genetic polymorphisms in monoamine receptors (5-HT2A), transporters (SERTPR, 5-HTTLPR, STin2, rs25531, SLC6A4), and regulatory enzymes should not be overlooked.35 There is reproducible evidence that these polymorphisms are associated with variability in:
- vulnerability to depression
- the response to treatment with existing antidepressant medications.1
Most studies that look at changes in neuronal circuitry focus on the integrity of synaptic connections between the frontal cortex and limbic system; few of them have closely examined the importance of the anatomic proximity of the 2 regions. It might be that having an amygdala that is relatively closer to the frontal cortex and the hippocampus reduces a person’s risk of depression, and vice versa. This association needs to be investigated further with imaging studies.
Environmental risk factors. The brain is thought to remain plastic until age 30.5 Plasticity diminishes after age 7—except in the hippocampus, which can regenerate throughout life.36 Early life experiences play an important role in forming synaptic connections between the frontal cortex and the limbic system, through a process known as fear conditioning.
Children learn early in life which stimuli should be perceived as threatening or aversive and how best to respond to preserve their safety and internal sense of well-being. Those who grow up in a hostile environment learn to perceive more stimuli as threatening than do children who grow up in a nurturing environment.32 It is possible that the amygdala is larger in children who grow up in less-than-ideal circumstances because this region is constantly being recruited—at the expense of the more rational frontal cortex.
Evidence suggests that these conditions reduce hippocampal neurogenesis37:
- increasing age
- substance abuse (opiates and methamphetamines)
- inadequate housing
- minimal physical activity
- little opportunity for social stimulation
- minimal learning experience.
Bottom Line
Depression has been understood as a neurotransmitter deficiency in the brain; treatments were engineered to increase release, or block degradation, of those neurotransmitters. Novel theories—all interconnected—of the neuroanatomical pathophysiology of depression focus more on differences in neuron density in the brain; effects of stress on neurogenesis and neuronal cell apoptosis; alterations in feedback pathways connecting the pre-frontal cortex to the limbic system; and the role of pro-inflammatory mediators evoked during the stress response.
Related Resources
- Fuchs E. Neurogenesis in the adult brain: is there an association with mental disorders? Eur Arch Psychiatry Clin Neurosci. 2007;257(5):247-249.
- Videbech P, Ravnkilde B. Hippocampal volume and depression: a meta-analysis of MRI studies. Am J Psychiatry. 2004; 161(11):1957-1966.
Disclosure
The authors report no financial relationships with any company whose products are mentioned in this article or with manufacturers of competing products.
Acknowledgement
Anita Rao, second-year medical student, Stritch School of Medicine, Loyola University, Chicago, Illinois, assisted in the preparation of this manuscript.
1. Eley TC, Sugden K, Corsico A, et al. Gene-environment interaction analysis of serotonin system markers with adolescent depression. Mol Psychiatry. 2004;9(10):908-915.
2. Haber SN, Rauch SL. Neurocircuitry: a window into the networks underlying neuropsychiatric disease. Neuropsychopharmacology. 2010;35(1):1-3.
3. Frodl T, Bokde AL, Scheuerecker J, et al. Functional connectivity bias of the orbitofrontal cortex in drug-free patients with major depression. Biol Psychiatry. 2010; 67(2):161-167.
4. Woolley CS, Gould E, McEwen BS. Exposure to excess glucocorticoids alters dendritic morphology of adult hippocampal pyramidal neurons. Brain Res. 1990;531(1-2): 225-231.
5. Heim C, Nemeroff CB. The impact of early adverse experiences on brain systems involved in the pathophysiology of anxiety and affective disorders. Biol Psychiatry. 1999;46(11):1509-1522.
6. Isgor C, Kabbaj M, Akil H, et al. Delayed effects of chronic variable stress during peripubertal-juvenile period on hippocampal morphology and on cognitive and stress axis functions in rats. Hippocampus. 2004;14(5):636-648.
7. De Kloet ER, Vreugdenhil E, Oitzl MS, et al. Brain corticosteroid receptor balance in health and disease. Endocr Rev. 1998;19(3):269-301.
8. Philip AM, Kim SD, Vijayan MM. Cortisol modulates the expression of cytokines and suppressors of cytokine signaling (SOCS) in rainbow trout hepatocytes. Dev Comp Immunol. 2012;38(2):360-367.
9. Coplan JD, Lydiard RB. Brain circuits in panic disorder. Biol Psychiatry. 1998;44(12):1264-1276.
10. Anisman H, Merali Z. Cytokines, stress and depressive illness: brain-immune interactions. Ann Med. 2003;35(1):2-11.
11. Crowley JJ, Lucki I. Opportunities to discover genes regulating depression and antidepressant response from rodent behavioral genetics. Curr Pharm Des. 2005;11(2):157-169.
12. Covington HE 3rd, Vialou V, Nestler EJ. From synapse to nucleus: novel targets for treating depression. Neuropharmacology. 2010;58(4-5):683-693.
13. Videbech P, Ravnkilde B. Hippocampal volume and depression: a meta-analysis of MRI studies. Am J Psychiatry. 2004;161(11):1957-1966.
14. Sandi C. Stress, cognitive impairment and cell adhesion molecules. Nat Rev Neurosci. 2004;5(12):917-930.
15. Hartley CA, Phelps EA. Changing fear: the neurocircuitry of emotion regulation. Neuropsychopharmacology. 2010;35(1): 136-146.
16. Kim DK, Lim SW, Lee S, et al. Serotonin transporter gene polymorphism and antidepressant response. Neuroreport. 2000;11(1):215-219.
17. Ueyama E, Ukai S, Ogawa A, et al. Chronic repetitive transcranial magnetic stimulation increases hippocampal neurogenesis in rats. Psychiatry Clin Neurosci. 2011;65(1):77-81.
18. Irwin W, Anderle MJ, Abercrombie HC, et al. Amygdalar interhemispheric functional connectivity differs between the non-depressed and depressed human brain. Neuroimage. 2004;21(2):674-686.
19. McEwen BS. Physiology and neurobiology of stress and adaptation: central role of the brain. Physiol Rev. 2007; 87(3):873-904.
20. Gusnard DA, Raichle ME, Raichle ME. Searching for a baseline: functional imaging and the resting human brain. Nat Rev Neurosci. 2001;2(10):685-694.
21. Hulsebosch CE, Hains BC, Crown ED, et al. Mechanisms of chronic central neuropathic pain after spinal cord injury. Brain Res Rev. 2009;60(1):202-213.
22. Gottfried JA, Dolan RJ. Human orbitofrontal cortex mediates extinction learning while accessing conditioned representations of value. Nat Neurosci. 2004;7(10):1144-1152.
23. Arnone D, McKie S, Elliott R, et al. State-dependent changes in hippocampal grey matter in depression. Mol Psychiatry. 2012;1(8):1359-4184.
24. Brunoni AR, Lopes M, Fregni F. A systematic review and meta-analysis of clinical studies on major depression and BDNF levels: implications for the role of neuroplasticity in depression. Int J Neuropsychopharmacol. 2008;11(8):1169-1180.
25. Maeng S, Zarate CA Jr. The role of glutamate in mood disorders: results from the ketamine in major depression study and the presumed cellular mechanism underlying its antidepressant effects. Curr Psychiatry Rep. 2007;9(6):467-474.
26. Vaidya VA, Fernandes K, Jha S. Regulation of adult hippocampal neurogenesis: relevance to depression. Expert Rev Neurother. 2007;7(7):853-864.
27. Lisiecka DM, Carballedo A, Fagan AJ, et al. Altered inhibition of negative emotions in subjects at family risk of major depressive disorder. J Psychiatr Res. 2012;46(2):181-188.
28. Mayberg HS, Lozano AM, Voon V, et al. Deep brain stimulation for treatment-resistant depression. Neuron. 2005;45(5):651-660.
29. Levkovitz Y, Harel EV, Roth Y, et al. Deep transcranial magnetic stimulation over the prefrontal cortex: evaluation of antidepressant and cognitive effects in depressive patients. Brain Stimul. 2009;2(4):188-200.
30. Schlaepfer TE, Lieb K. Deep brain stimulation for treatment of refractory depression. Lancet. 2005;366(9495):1420-1422.
31. Astrup J. Energy-requiring cell functions in the ischemic brain. Their critical supply and possible inhibition in protective therapy. J Neurosurg. 1982;56(4):482-497.
32. Fletcher JM. Childhood mistreatment and adolescent and young adult depression. Soc Sci Med. 2009;68(5):799-806.
33. Warner-Schmidt JL, Duman R. VEGF as a potential target for therapeutic intervention in depression. Curr Opin Pharmacol. 2008;8(1):14-19.
34. Clark-Raymond A, Halaris A. VEGF and depression: a comprehensive assessment of clinical data. J Psychiatr Res. 2013;47(8):1080-1087.
35. Alonso R, Griebel G, Pavone G, et al. Blockade of CRF(1) or V(1b) receptors reverses stress-induced suppression of neurogenesis in a mouse model of depression. Mol Psychiatry. 2004;9(3):278-286.
36. Thomas RM, Peterson DA. A neurogenic theory of depression gains momentum. Mol Interv. 2003;3(8):441-444.
37. Jacobs BL. Adult brain neurogenesis and depression. Brain Behav Immun. 2002;16(5):602-609.
Expressing yourself can be risky business!
In the adolescent years, it’s all about "YOLO" (You only live once!), which is the premise for many of the behaviors that primary care doctors see during this time. Many teens come into the office covered in piercings and tattoos. My favorite is the boyfriend’s name tattooed across an arm, a leg, or even the buttocks. You can’t help but think to yourself, "You are going to regret that one, for sure!"
Although tattooing and piercing are practiced in many cultures, extensive body art and multiple piercings are often found in adolescents who also engage in other risky behaviors ("Tattooing in adults and adolescents," UpToDate, Aug. 29, 2013). One study showed that tattooed adolescents were significantly more likely to engage in sexual activity, binge drinking, marijuana smoking, and fighting than were non–tattooed adolescents.
As a primary care doctor, you’re less likely to be asked for advice about getting a tattoo or piercing than you are to be asked to fix the mishaps of these practices, but it is still important to be up to date on the potential risks. Adolescents are at even greater risk than adults for complications because, in most states, children under age 18 years are required to have parental consent to get a tattoo or piercing. This age group is therefore more likely to seek out illegal or unlicensed businesses, where substandard protocols and hygiene increase the risk of infection. Many infections come from nonsterile cleaning fluid and water used in the tattooing or piercing procedure. Improper education of the client on aftercare is another contributing factor.
Local infections are the No. 1 complication of tattoos and piercings. Staphylococcus aureus and methicillin-resistant S. aureus (MRSA) are the most common causes of infection, but several other infectious agents have also been identified. It is not uncommon to have outbreaks of a particular infection occur because a certain provider is not using appropriate hygiene or is a carrier of one of the blood-borne illnesses.
Some caution tattoo seekers about blood-borne infections such as HIV, hepatitis C, and hepatitis B. But the number of such infections is currently relatively low. In fact, the research does not show a clear causal relationship between tattoos or piercings and these infections. Instead, it shows that, because adolescents who get tattoos are risk takers and are more likely to be intravenous drug users, they are also more likely to become infected with these diseases (Pediatrics 2002;109:1021-7).
A major complication that should not be overlooked is infective endocarditis. Although rare, if a teen presents 1-2 months after a body piercing or tattoo and has unexplained fevers, weakness, arthritis, and malaise, a work-up should be done with infective endocarditis in mind.
The role of the primary care doctor in this situation is to educate patients on appropriate practices so that they will be less inclined to have an inexperienced and unlicensed person perform body art procedures on them. Patients should expect the skin to be cleaned initially with alcohol and iodine, and sterile water and gloves to be used in the procedure. A clear understanding of the potential health risks and life-long complications should also help to deter them from unsafe practices.
In the event of a local infection, it may actually be better to leave the piercing in because it allows for drainage. Antibiotic coverage that includes MRSA will also speed recovery.
Tattoos and piercings can be safe when done properly. Being proactive and sharing the appropriate information can help an adolescent make a better decision so that not only do they get to live it up, they can live a long, healthy life as well.
Dr. Pearce is a pediatrician in Frankfort, Ill. E-mail her at pdnews@frontlinemedcom.com.
Synthetic lethality: beating cancer at its own game
The primary focus for targeted cancer agents has typically been to counteract the oncogenic signaling that results from genetic defects. A new strategy is emerging that actually seeks to exploit the oncogenic features of tumor cells rather than overcome them. Synthetic lethality (SL) is a situation in which 2 nonlethal mutations become lethal to a cell when they are present simultaneously. If SL were to be exploited for anticancer therapy, it could lead to the development of highly selective, less toxic drugs, while expanding therapeutic targets to include those that have, until now, proven pharmaceutically intractable. Here, we discuss the idea of SL and how it can be applied to cancer therapy.
Biomarker testing for treatment of metastatic colorectal cancer: role of the pathologist in community practice
The past decade has been marked by significant advancements in the treatment of patients with metastatic colorectal cancer (mCRC), including the approval of novel biologic agents such as the angiogenesis inhibitors bevacizumab and aflibercept and the epidermal growth factor receptor monoclonal antibodies (mAbs) cetuximab and panitumumab. Cetuximab was recently approved by the US Food and Drug Administration in combination with FOLFIRI (irinotecan, 5-fluorouracil, leucovorin) for the first-line treatment of patients with KRAS mutation-negative (wild-type) tumors as determined by an FDA-approved companion diagnostic. It was the first FDA approval in mCRC requiring use of a diagnostic test that is predictive of response prior to initiation of frontline therapy.
Current options and future directions in the systemic treatment of metastatic melanoma
Systemic treatment options for metastatic melanoma have historically been limited, with conventional cytotoxic chemotherapies demonstrating only modest benefit. Recent advances, however, have dramatically changed the treatment landscape and can be considered in 2 general categories: immunotherapeutic approaches that enhance antitumor immunity, and targeted therapeutic approaches that block oncogenic driver mutations. Immunotherapy with antibodies that block cytotoxic T-lymphocyte antigen 4 and programmed death-1 receptor can result in durable responses in a subset of patients. These treatments may be considered for patients irrespective of their mutational status, and ongoing research continues to investigate biomarkers associated with clinical outcomes. Side effects of these agents result from immune-mediated reactions involving various organ sites and can include diarrhea, rash, hepatitis, and endocrinopathies.
Current gout guidelines stress ‘treat to target’
SNOWMASS, COLO. – The current American College of Rheumatology gout guidelines contain a number of recommendations that may come as a surprise to rheumatologists and primary care physicians alike.
The guidelines state, for example, that urate-lowering therapy should be undertaken routinely in any patient with an established diagnosis of gout who has comorbid chronic kidney disease (CKD) that is stage 2 or worse, meaning an estimated glomerular filtration rate of 89 mL/minute per 1.73 m² or less.
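The stage cut points referenced here follow the standard KDIGO eGFR thresholds. As a quick reference, they can be written as a simple lookup; this is an illustrative simplification only (true staging of stages 1 and 2 also requires evidence of kidney damage, which this toy function ignores).

```python
# Simplified eGFR-to-CKD-stage lookup using the standard KDIGO cut points.
# Illustrative only: stages 1-2 formally also require evidence of kidney
# damage, which this function does not model.

def ckd_stage(egfr):
    """Return the CKD stage label for an eGFR in mL/min per 1.73 m^2."""
    if egfr >= 90:
        return "1"
    if egfr >= 60:
        return "2"
    if egfr >= 45:
        return "3a"
    if egfr >= 30:
        return "3b"
    if egfr >= 15:
        return "4"
    return "5"

# Per the guideline text, urate lowering becomes routine at eGFR <= 89,
# i.e., stage 2 or worse.
assert ckd_stage(89) == "2"
```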
The rationale is that it’s particularly important to try to prevent acute gout attacks in such patients because their renal dysfunction makes it problematic to use colchicine and NSAIDs to quell attacks. Intriguing studies suggest that lowering serum urate may actually slow progression of CKD, Dr. Michael H. Pillinger said at the Winter Rheumatology symposium sponsored by the American College of Rheumatology.
The guidelines name the other indications for urate lowering in gout patients as the presence of a tophus on clinical examination or an imaging study, a history of two or more gout attacks per year, or a history of kidney stones.
Traditionally, urate-lowering therapy has been initiated during quiescent periods, but the ACR guidelines state that it also can be started during an acute attack if effective anti-inflammatory management has been instituted.
"This goes against what I was taught," observed Dr. Pillinger, a rheumatologist and director of the crystal diseases study group at New York University.
The guidelines (Arthritis Care Res. 2012;64:1431-46 and 1447-61) emphasize the importance of a treat-to-target approach.
"The primary care physicians I talk to still don’t know this. The ACR recommends a minimum serum urate target of less than 6.0 mg/dL, but the guidelines are very clear that if 6 isn’t good enough, you keep going. You go below 5. When I see patients with tophaceous gout, my target is never 6. My target is 5 or 4. That’s what I teach my fellows," explained Dr. Pillinger, who served on an expert panel that advised the guideline-writing task force.
The ACR urate-lowering algorithm begins with either allopurinol or febuxostat (Uloric) as first-line therapy. The guideline committee, which expressly excluded cost as a consideration, offered no guidance as to which xanthine oxidase inhibitor is preferred. Dr. Pillinger noted that febuxostat is a more specific xanthine oxidase inhibitor, is simpler to dose, and is far less likely to cause hypersensitivity reactions than is allopurinol. It is also more effective, although not dramatically more so. And it is considerably more expensive.
Febuxostat is approved by the Food and Drug Administration specifically for use in patients with mild to moderate CKD. Allopurinol is not. However, the gout guidelines endorse the use of allopurinol in that setting.
When allopurinol is the initial drug, the guidelines recommend dosing it in a manner that is different from how most physicians have been using it, the rheumatologist said. The recommended starting dose is lower than has been customary: 100 mg/day, and 50 mg/day in patients with stage 4 or 5 CKD. The drug is to be titrated upward every 2-5 weeks as needed to achieve the target urate level. The maximum dose is 800 mg/day, even in patients with comorbid CKD. Although the guidelines don’t provide guidance as to the size of the stepwise dosing increases, Dr. Pillinger usually boosts the allopurinol dose by 100 mg at a time, or 50 mg in patients with CKD.
"Most patients don’t get to target at 300 mg/day. You’ve got to go higher," he said.
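The titration schedule described above is essentially a stepwise algorithm: start low, recheck, escalate until the urate target or the dose ceiling is reached. The sketch below captures only that dosing arithmetic; it is not clinical guidance, and the patient "response" function is an invented placeholder standing in for the 2-5 week serum urate recheck.

```python
# Illustrative sketch of the ACR allopurinol titration schedule described
# above -- NOT clinical guidance. Only the dosing arithmetic (start 100
# mg/day, or 50 mg/day in stage 4-5 CKD; step up by the same increment;
# cap at 800 mg/day) follows the guideline text. The urate response is a
# made-up placeholder.

def titrate_allopurinol(measure_urate, severe_ckd=False,
                        target=6.0, max_dose=800):
    """Step the dose up until serum urate < target or the ceiling is hit.

    measure_urate: callable taking the current dose (mg/day) and returning
    a serum urate level (mg/dL) -- a stand-in for the periodic recheck.
    """
    step = 50 if severe_ckd else 100   # stage 4-5 CKD starts and steps lower
    dose = step                        # starting dose: 100, or 50 with CKD
    while measure_urate(dose) >= target and dose + step <= max_dose:
        dose += step
    return dose

# Hypothetical patient whose urate falls ~0.5 mg/dL per 100 mg of drug
# from a baseline of 9.0 mg/dL (numbers invented for illustration); such
# a patient would need well over 300 mg/day to reach the <6.0 target.
final_dose = titrate_allopurinol(lambda d: 9.0 - 0.5 * d / 100)
```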
An important innovation in the current guidelines is the recommendation for testing for the HLA-B*5801 allele in patients of Korean, Thai, or Han Chinese ancestry who are being considered for allopurinol therapy. The presence of this allele confers a several hundred–fold increased risk of allopurinol hypersensitivity.
Probenecid is endorsed as the alternative first-line urate-lowering agent, but only if at least one xanthine oxidase inhibitor is contraindicated or not tolerated. No other agents get the nod as first-line therapy.
The guidelines state that if a patient’s serum urate is not at target despite maximum-dose therapy with a first-line xanthine oxidase inhibitor, it is not appropriate to switch to the other xanthine oxidase inhibitor. Instead, it is time to add a uricosuric agent: probenecid, losartan, or fenofibrate. If the urate level still is not at target and the patient is generally well, with few gout attacks, then that’s an acceptable result. However, if the patient has moderate tophaceous gout or chronic gouty arthropathy, it’s appropriate to place the patient on pegloticase (Krystexxa) while discontinuing all other urate-lowering agents.
The ACR guidelines stress that it is vital to always try to prevent gout attacks during initiation of urate-lowering therapy. The recommended first-line agents for prophylaxis are low-dose colchicine or a low-dose NSAID, with prednisone at a dose not to exceed 10 mg/day reserved as second-line therapy in the event the first-line agents are not tolerated or are ineffective.
Prophylaxis is supposed to continue as long as a patient has any evidence of disease activity. And once all symptoms and tophi have resolved, all measures needed to keep the serum urate below 6.0 mg/dL are to be continued indefinitely.
"For most patients," Dr. Pillinger concluded, "gout treatment is almost always forever."
He reported having received research grants from Takeda, which markets febuxostat in the United States, and Savient, which markets pegloticase.
EXPERT ANALYSIS FROM THE ACR WINTER RHEUMATOLOGY SYMPOSIUM
Consortium study falls short of expectations
SAN FRANCISCO—A group’s effort to identify optimal front-line treatment for peripheral T-cell lymphomas (PTCLs) was not as successful as researchers anticipated.
The North American PTCL Consortium set out to find a treatment that could best CHOP (cyclophosphamide, hydroxydaunorubicin, vincristine, and prednisone), as studies have suggested this regimen is inadequate for patients with PTCL.
So the group organized a trial testing a potentially more promising regimen: cyclophosphamide, etoposide, vincristine, and prednisone, alternating with pralatrexate (CEOP-P).
However, CEOP-P elicited a complete response (CR) rate comparable to rates historically seen with CHOP, and progression-free survival rates with the new regimen were “not particularly encouraging.”
Ranjana Advani, MD, of Stanford University Medical Center in California, discussed this trial’s inception, execution, and results at the 6th Annual T-cell Lymphoma Forum.
Trial inception
It all began with the first meeting of the North American PTCL Consortium, which took place at the 2006 ASH Annual Meeting. Physicians from 17 centers gathered to discuss the state of PTCL research in North America.
The group realized there were too many open studies for such a rare disease, and efforts should be more focused. However, they could not agree publicly as to which studies should get priority, so they used an anonymous survey to obtain a consensus.
Survey responses were “all over the map,” Dr Advani said. But ultimately, the consensus was that ongoing trials were not sufficient, and a new trial was necessary.
The group decided to first lend their support to ongoing trials and then launch a new study. At the fourth and fifth meetings of the North American PTCL Consortium (both in 2009), they drafted the concept of a front-line trial testing CEOP-P.
“We decided to use [CEOP] as a backbone because there were reservations about anthracyclines having a role in PTCL, and there was data . . . in patients [with B-cell lymphomas] who were not anthracycline-eligible and did reasonably well when etoposide was substituted [for hydroxydaunorubicin],” Dr Advani said.
As for the second “P” in CEOP-P, pralatrexate was the first drug approved for patients with relapsed PTCL, which provided the rationale for evaluating it in the front-line setting.
Execution and results
The primary aim of this study was to improve the CR rate from 40% to 60% with CEOP-P followed by optional transplant. A literature review had revealed that CRs with CHOP have been in the range of 40% to 50%.
The researchers enrolled a total of 34 patients, but 1 withdrew consent. Twenty-seven patients received at least 2 cycles of CEOP-P. Of the 6 patients who received a single cycle, 4 discontinued treatment due to early disease progression, and 2 discontinued because of adverse events.
Grade 3-4 adverse events associated with CEOP-P included anemia, thrombocytopenia, febrile neutropenia, mucositis, sepsis, and increased creatinine and liver transaminase levels.
The researchers had used a 2-stage Simon design (alpha=0.10, 90% power) to test the null hypothesis of a CR rate of 40% or less against the alternative of 60%.
For the first stage of 20 evaluable patients, the trial would be terminated if 8 or fewer patients experienced a CR after course 2B of chemotherapy. For the second stage, 34 patients were required, and at least 17 had to achieve a CR at the end of therapy for the regimen to be considered useful.
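Under simple binomial assumptions, these stopping and success thresholds (stop if 8 or fewer CRs in the first 20 patients; declare the regimen useful if at least 17 of 34 respond) translate directly into operating characteristics. The sketch below computes them from first principles; it is illustrative only, not the investigators' actual sample-size calculation.

```python
# Operating characteristics of the two-stage design described above,
# assuming simple binomial sampling: stop if <= 8 CRs among the first
# 20 patients; regimen deemed useful if >= 17 CRs among 34 total.
# The 40%/60% rates come from the article; the rest is plain binomial
# arithmetic (illustrative, not the investigators' own calculation).
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def reject_prob(p, n1=20, r1=8, n=34, r=16):
    """P(pass stage 1 AND > r total CRs) when the true CR rate is p."""
    n2 = n - n1
    total = 0.0
    for x1 in range(r1 + 1, n1 + 1):          # continue past stage 1
        tail = sum(binom_pmf(x2, n2, p)
                   for x2 in range(max(0, r + 1 - x1), n2 + 1))
        total += binom_pmf(x1, n1, p) * tail  # need x1 + x2 >= 17
    return total

alpha = reject_prob(0.40)   # chance a 40%-CR regimen looks "useful"
power = reject_prob(0.60)   # chance a 60%-CR regimen is detected
```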
At the end of stage 1, 50% of the patients (10/20) had achieved a CR. Ultimately, 52% of all patients (n=17) achieved a CR.
This suggests CEOP-P is a useful regimen, according to the study design. But the primary aim of improving CR from 40% to 60% was not met.
Furthermore, the estimated 1-year and 2-year progression-free survival rates were “not particularly encouraging,” according to Dr Advani. The rates were 50% and 34%, respectively. And the estimated 1-year and 2-year overall survival rate was 64%.
“So this was a lesson in working together and getting a trial from ground zero, to up and running, to a presentation, and publication underway,” Dr Advani said.
“And even though it took in all the ingredients of what everybody thought was important . . . , it’s not a regimen which has that much promise to move to a randomized setting. And so defining the optimal front-line therapy in PTCL continues to be a challenge and an unmet need.”
Now, the North American PTCL Consortium is working on a second front-line trial testing cyclophosphamide, hydroxydaunorubicin, vincristine, etoposide, and prednisone (CHOEP) plus lenalidomide in stage II, III, and IV PTCL. The final protocol has been circulated, and the group anticipates the first patient will be enrolled by June or July of this year.
Dr Advani and her colleagues also presented results of the CEOP-P trial at the 2013 ASH Annual Meeting as abstract 3044. (Information in the abstract differs from that presented at the T-cell Lymphoma Forum.)
SAN FRANCISCO—A group’s effort to identify optimal front-line treatment for peripheral T-cell lymphomas (PTCLs) was not as successful as researchers anticipated.
The North American PTCL Consortium set out to find a treatment that could best CHOP (cyclophosphamide, hydroxydaunorubicin, vincristine, and prednisone), as studies have suggested this regimen is inadequate for patients with PTCL.
So the group organized a trial testing
a potentially more promising regimen: cyclophosphamide, etoposide, vincristine, and prednisone, alternating with pralatrexate (CEOP-P).
However, CEOP-P elicited a complete response (CR) rate comparable to rates historically seen with CHOP, and progression-free survival rates with the new regimen were “not particularly encouraging.”
Ranjana Advani, MD, of Stanford University Medical Center in California, discussed this trial’s inception, execution, and results at the 6th Annual T-cell Lymphoma Forum.
Trial inception
It all began with the first meeting of the North American PTCL Consortium, which took place at the 2006 ASH Annual Meeting. Physicians from 17 centers gathered to discuss the state of PTCL research in North America.
The group realized there were too many open studies for such a rare disease, and efforts should be more focused. However, they could not agree publicly as to which studies should get priority, so they used an anonymous survey to obtain a consensus.
Survey responses were “all over the map,” Dr Advani said. But ultimately, the consensus was that ongoing trials were not sufficient, and a new trial was necessary.
The group decided to first lend their support to ongoing trials and then launch a new study. At the fourth and fifth meetings of the North American PTCL Consortium (both in 2009), they drafted the concept of a front-line trial testing CEOP-P.
“We decided to use [CEOP] as a backbone because there were reservations about anthracyclines having a role in PTCL, and there was data . . . in patients [with B-cell lymphomas] who were not anthracycline-eligible and did reasonably well when etoposide was substituted [for hydroxydaunorubicin],” Dr Advani said.
As for for the second “P” in CEOP-P, pralatrexate was the first drug approved for patients with relapsed PTCL, which provided the rationale for evaluating it in the front-line setting.
Execution and results
The primary aim of this study was to improve the CR rate from 40% to 60% with CEOP-P followed by optional transplant. A literature review had revealed that CRs with CHOP have been in the range of 40% to 50%.
The researchers enrolled a total of 34 patients, but 1 withdrew consent. Twenty-seven patients received at least 2 cycles of CEOP-P. Of the 6 patients who received a single cycle, 4 discontinued treatment due to early disease progression, and 2 discontinued because of adverse events.
Grade 3-4 adverse events associated with CEOP-P included anemia, thrombocytopenia, febrile neutropenia, mucositis, sepsis, increased creatinine, and liver transaminases.
The researchers had used a 2-stage Simon design (alpha=0.10, 90% power) to test the null hypothesis that the CR rate would be 40% or greater.
For the first stage of 20 evaluable patients, the trial would be terminated if 8 or fewer patients experienced a CR after course 2B of chemotherapy. For the second stage, 34 patients were required, and at least 17 had to achieve a CR at the end of therapy for the regimen to be considered useful.
At the end of stage 1, 50% of the patients (10/20) had achieved a CR. Ultimately, 52% of all patients (n=17) achieved a CR.
This suggests CEOP-P is a useful regimen, according to the study design. But the primary aim of improving CR from 40% to 60% was not met.
Furthermore, the estimated 1-year and 2-year progression-free survival rates were “not particularly encouraging,” according to Dr Advani. The rates were 50% and 34%, respectively. And the estimated 1-year and 2-year overall survival rate was 64%.
“So this was a lesson in working together and getting a trial from ground zero, to up and running, to a presentation, and publication underway,” Dr Advani said.
“And even though it took in all the ingredients of what everybody thought was important . . . , it’s not a regimen which has that much promise to move to a randomized setting. And so defining the optimal front-line therapy in PTCL continues to be a challenge and an unmet need.”
Now, the North American PTCL Consortium is working on a second front-line trial testing cyclophosphamide, hydroxydaunorubicin, vincristine, etoposide, and prednisone (CHOEP) plus lenalidomide in stage II, III, and IV PTCL. The final protocol has been circulated, and the group anticipates the first patient will be enrolled by June or July of this year.
Dr Advani and her colleagues also presented results of the CEOP-P trial at the 2013 ASH Annual Meeting as abstract 3044. (Information in the abstract differs from that presented at the T-cell Lymphoma Forum.)
SAN FRANCISCO—A group’s effort to identify optimal front-line treatment for peripheral T-cell lymphomas (PTCLs) was not as successful as researchers anticipated.
The North American PTCL Consortium set out to find a treatment that could best CHOP (cyclophosphamide, hydroxydaunorubicin, vincristine, and prednisone), as studies have suggested this regimen is inadequate for patients with PTCL.
So the group organized a trial testing
a potentially more promising regimen: cyclophosphamide, etoposide, vincristine, and prednisone, alternating with pralatrexate (CEOP-P).
However, CEOP-P elicited a complete response (CR) rate comparable to rates historically seen with CHOP, and progression-free survival rates with the new regimen were “not particularly encouraging.”
Ranjana Advani, MD, of Stanford University Medical Center in California, discussed this trial’s inception, execution, and results at the 6th Annual T-cell Lymphoma Forum.
Trial inception
It all began with the first meeting of the North American PTCL Consortium, which took place at the 2006 ASH Annual Meeting. Physicians from 17 centers gathered to discuss the state of PTCL research in North America.
The group realized there were too many open studies for such a rare disease, and efforts should be more focused. However, they could not agree publicly as to which studies should get priority, so they used an anonymous survey to obtain a consensus.
Survey responses were “all over the map,” Dr Advani said. But ultimately, the consensus was that ongoing trials were not sufficient, and a new trial was necessary.
The group decided to first lend their support to ongoing trials and then launch a new study. At the fourth and fifth meetings of the North American PTCL Consortium (both in 2009), they drafted the concept of a front-line trial testing CEOP-P.
“We decided to use [CEOP] as a backbone because there were reservations about anthracyclines having a role in PTCL, and there was data . . . in patients [with B-cell lymphomas] who were not anthracycline-eligible and did reasonably well when etoposide was substituted [for hydroxydaunorubicin],” Dr Advani said.
As for the second “P” in CEOP-P, pralatrexate was the first drug approved for patients with relapsed PTCL, which provided the rationale for evaluating it in the front-line setting.
Execution and results
The primary aim of this study was to improve the CR rate from 40% to 60% with CEOP-P followed by optional transplant. A literature review had revealed that CRs with CHOP have been in the range of 40% to 50%.
The researchers enrolled a total of 34 patients, but 1 withdrew consent. Twenty-seven patients received at least 2 cycles of CEOP-P. Of the 6 patients who received a single cycle, 4 discontinued treatment due to early disease progression, and 2 discontinued because of adverse events.
Grade 3-4 adverse events associated with CEOP-P included anemia, thrombocytopenia, febrile neutropenia, mucositis, sepsis, increased creatinine, and elevated liver transaminases.
The researchers had used a 2-stage Simon design (alpha=0.10, 90% power) to test the null hypothesis that the CR rate was 40% or less.
For the first stage of 20 evaluable patients, the trial would be terminated if 8 or fewer patients experienced a CR after course 2B of chemotherapy. For the second stage, 34 patients were required, and at least 17 had to achieve a CR at the end of therapy for the regimen to be considered useful.
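The operating characteristics of this two-stage rule can be checked with a short script. This is a hypothetical calculation using only the thresholds stated above (continue past stage 1 with 9 or more CRs among 20 patients; declare the regimen useful with 17 or more CRs among 34); it need not reproduce the quoted alpha and power exactly, since trial details such as evaluability rules are omitted here.

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent Bernoulli(p) trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def prob_declare_useful(p, n1=20, r1=8, n=34, r=17):
    """Probability of passing stage 1 (more than r1 CRs among the first n1
    patients) and then seeing at least r CRs among all n patients, when the
    true CR probability is p."""
    n2 = n - n1
    total = 0.0
    for x1 in range(r1 + 1, n1 + 1):  # stage-1 outcomes that continue the trial
        need = max(r - x1, 0)         # additional CRs required in stage 2
        p_stage2 = sum(binom_pmf(x2, n2, p) for x2 in range(need, n2 + 1))
        total += binom_pmf(x1, n1, p) * p_stage2
    return total

alpha = prob_declare_useful(0.40)  # false-positive rate under the null CR rate
power = prob_declare_useful(0.60)  # power under the hoped-for CR rate
print(f"alpha = {alpha:.3f}, power = {power:.3f}")
```

Under these thresholds, a true CR rate of 60% is declared useful far more often than a true CR rate of 40%, which is the point of the two-stage screen.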
At the end of stage 1, 50% of the patients (10/20) had achieved a CR. Ultimately, 52% of all patients (n=17) achieved a CR.
This suggests CEOP-P is a useful regimen, according to the study design. But the primary aim of improving CR from 40% to 60% was not met.
Furthermore, the estimated 1-year and 2-year progression-free survival rates were “not particularly encouraging,” according to Dr Advani. The rates were 50% and 34%, respectively. And the estimated overall survival rate was 64%.
“So this was a lesson in working together and getting a trial from ground zero, to up and running, to a presentation, and publication underway,” Dr Advani said.
“And even though it took in all the ingredients of what everybody thought was important . . . , it’s not a regimen which has that much promise to move to a randomized setting. And so defining the optimal front-line therapy in PTCL continues to be a challenge and an unmet need.”
Now, the North American PTCL Consortium is working on a second front-line trial testing cyclophosphamide, hydroxydaunorubicin, vincristine, etoposide, and prednisone (CHOEP) plus lenalidomide in stage II, III, and IV PTCL. The final protocol has been circulated, and the group anticipates the first patient will be enrolled by June or July of this year.
Dr Advani and her colleagues also presented results of the CEOP-P trial at the 2013 ASH Annual Meeting as abstract 3044. (Information in the abstract differs from that presented at the T-cell Lymphoma Forum.)
Researcher status affects paper popularity, study suggests
New research indicates that author status affects how frequently scientific papers are cited, but the size of that effect depends on a number of other factors.
Investigators found that, overall, citations increased 12% above the expected level when authors were awarded “prestigious investigator status” at the Howard Hughes Medical Institute (HHMI).
However, certain kinds of research papers benefited more than others from this increased prestige.
“We find much more of an effect on recent papers published in a short window before the prize,” said study author Pierre Azoulay, PhD, of the MIT Sloan School of Management in Cambridge, Massachusetts.
And the greatest gains came for papers in new areas of research and for papers published in lower-profile journals. Younger researchers who had lower profiles prior to receiving the HHMI award were more likely to see a change as well.
“The effect was much more pronounced when there was more reason to be uncertain about the quality of the science or the scientist before the prize,” Dr Azoulay noted.
This paper, titled “Matthew: Effect or Fable?,” was published in Management Science.
The “Matthew Effect” is a term coined by sociologist Robert K. Merton to describe the possibility that the work of those with high status receives greater attention than equivalent work by those who are not as well known.
Positively identifying this phenomenon in scientific paper citations is difficult, however, because it is hard to separate the status of the author from the quality of the paper. It is possible, after all, that better-known researchers are simply producing higher-quality papers, which get more attention as a result.
But Dr Azoulay and his colleagues said they’ve found a way to address this issue. They looked at papers first published before the authors became HHMI investigators, then examined the citation rates for those papers after the HHMI appointments occurred, compared to a baseline of similar papers whose authors did not receive HHMI appointments.
More specifically, each paper in the study was paired with what Dr Azoulay called a “fraternal twin,” that is, another paper published in the same journal, at the same time, with the same initial citation pattern. For good measure, the authors of the papers in this comparison group were all scientists who had received other early career awards.
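A minimal sketch of this “fraternal twin” matching might look like the following. The field names and data here are invented for illustration; the study’s actual matching used full early citation trajectories, not the single early citation count used in this toy version.

```python
# Toy matching: pair a treated paper with the control paper from the same
# journal and year whose early citation count is closest.

def match_twin(paper, controls):
    """Return the control in the same journal and year with the closest
    early citation count, or None if no such candidate exists."""
    candidates = [c for c in controls
                  if c["journal"] == paper["journal"] and c["year"] == paper["year"]]
    if not candidates:
        return None
    return min(candidates, key=lambda c: abs(c["early_cites"] - paper["early_cites"]))

# Invented example data
treated = {"id": "T1", "journal": "Cell", "year": 1999, "early_cites": 40}
controls = [
    {"id": "C1", "journal": "Cell", "year": 1999, "early_cites": 38},
    {"id": "C2", "journal": "Cell", "year": 1999, "early_cites": 90},
    {"id": "C3", "journal": "Nature", "year": 1999, "early_cites": 41},
]
print(match_twin(treated, controls)["id"])  # closest same-journal, same-year match
```

The design choice matters: because twins match on journal, timing, and pre-award citations, any post-award divergence in citations can be more plausibly attributed to the award itself rather than to paper quality.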
In all, from 1984 through 2003, 443 scientists were named HHMI investigators. Dr Azoulay and his colleagues examined 3636 papers written by 424 of those scientists, comparing them to 3636 papers in the control group.
“You couldn’t tell [the 2 sets of papers] apart in terms of citation trajectories, up until the time of the prize,” Dr Azoulay said.
Beyond the overall 12% increase in citations, the effect was nearly twice as great for papers published in lower-profile journals.
Conversely, Dr Azoulay pointed out, “If your paper was published in Cell or Nature or Science, the HHMI [award] doesn’t add a lot.”
Methylation patterns can predict survival in AML, team says
Researchers have found evidence to suggest that methylation patterns in hematopoietic stem cells (HSCs) can be used to determine prognosis in patients with acute myeloid leukemia (AML).
The team discovered that patients with methylation patterns resembling those of healthy individuals lived longer than patients with substantially different patterns.
If validated in clinical trials, this finding could be used to help physicians tailor treatment according to a patient’s needs.
Ulrich Steidl, MD, PhD, of the Albert Einstein College of Medicine in New York, and his colleagues described this research in The Journal of Clinical Investigation.
The investigators knew that aberrations in HSC methylation can prevent the cells from differentiating into mature blood cells, which leads to AML.
So they speculated that comparing how closely the methylation patterns in cells from AML patients resemble the patterns found in healthy individuals’ HSCs might foretell the patients’ response to treatment.
To find out, the researchers first looked at methylation patterns in HSCs from healthy individuals. The team found that most cytosines are methylated in healthy HSCs.
And where demethylation occurs, it’s mainly limited to one particular stage of HSC differentiation—the commitment step from short-term HSC to common myeloid progenitor.
The investigators then set out to identify loci with the most significant methylation changes across differentiation stages. Their analysis revealed a set of 561 loci that distinguished between the 4 stages of HSC development they investigated.
The team next wanted to determine whether the methylation status of these loci was affected in AML. So they developed an epigenetic signature score based on loci methylation. A patient’s score increased the further his or her methylation pattern diverged from that of a healthy individual.
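As a toy illustration of a distance-based score of this kind (the paper’s actual 561-locus signature and scoring formula are not reproduced here, and the loci and values below are invented), one could measure how far a patient’s methylation beta values depart from a healthy HSC reference:

```python
# Hypothetical sketch: score a patient by the mean absolute deviation of
# methylation beta values (each in [0, 1]) from a healthy HSC reference
# across the signature loci. Only 5 toy loci are used instead of 561.

def epigenetic_score(patient, reference):
    """Mean absolute deviation from the healthy reference across loci."""
    assert len(patient) == len(reference)
    return sum(abs(p - r) for p, r in zip(patient, reference)) / len(reference)

healthy_ref = [0.90, 0.80, 0.95, 0.10, 0.85]  # invented reference values
similar_pt  = [0.88, 0.82, 0.90, 0.15, 0.80]  # close to healthy -> low score
aberrant_pt = [0.20, 0.30, 0.40, 0.90, 0.10]  # diverged -> high score

print(epigenetic_score(similar_pt, healthy_ref))
print(epigenetic_score(aberrant_pt, healthy_ref))
```

In this framing, a low score (pattern near the healthy reference) would correspond to the better-prognosis group described below, and a high score to the worse-prognosis group.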
The researchers tested their scoring method using data from 3 cohorts of AML patients. In each of these groups, patients with low scores had approximately twice the median survival time of patients with high scores.
Specifically, the investigators evaluated AML patients in a trial testing 2 different doses of daunorubicin (Fernandez et al, NEJM 2009).
Among patients receiving lower-dose daunorubicin, those with lower epigenetic signature scores had a median overall survival (OS) of 19 months, compared with 10.8 months for patients with higher scores (P=0.0165).
The researchers observed similar results in the patients receiving a higher dose of daunorubicin. The median OS in the group with low epigenetic signature scores was 25.4 months, compared with 13.2 months in the group with high scores (P=0.0062).
Likewise, in a third cohort of AML patients, those with a low epigenetic signature score had significantly better OS than those with a high score—a median of 28.1 months and 14.9 months, respectively (P=0.0150).
The investigators performed the same analyses using a commitment-associated gene-expression signature. And they found their epigenetic signature was more effective at predicting patient survival.
Dr Steidl and his colleagues are now studying the genes found in the aberrant epigenetic signatures to determine if they play a role in causing AML.
FDA approves system for GVHD prophylaxis
The US Food and Drug Administration (FDA) has granted approval for a device system that can prevent graft-vs-host disease (GVHD).
The CliniMACS CD34 Reagent System is intended for use in patients with acute myeloid leukemia who are in first complete remission and undergoing stem cell transplant (SCT) from a matched, related donor.
This in vitro system enriches CD34+ hematopoietic stem cells from a donated apheresis product, while depleting other cells that can cause GVHD.
The system employs a reagent consisting of a CD34 antibody conjugated to an iron-containing nanoparticle. It enriches CD34+ cells by passing the antibody/nanoparticle-labeled cell suspension through a magnetic separation column, which is provided as part of a single-use, disposable tubing set.
Magnetically labeled CD34+ target cells are retained within the separation column, while the unlabeled cells flow through. The CD34+ cells can be recovered by removing the magnetic field and eluting the targeted CD34+ cells into a collection bag.
The FDA’s approval of this system was based on data from a phase 2 study (BMT CTN 0303) conducted by the Blood and Marrow Transplant Clinical Trials Network (Pasquini et al, JCO 2012).
The trial included 128 patients undergoing SCT from a matched, sibling donor. Forty-four patients received grafts that were T-cell depleted (TCD) using the CliniMACS system as the sole form of immune suppression. The other 84 patients received T-cell-replete grafts and pharmacologic immune suppression therapy (IST).
The 2 groups were largely similar, although more patients in the TCD arm received treatment regimens that included radiation—100% vs 50%.
Neutrophil engraftment was similar between the 2 groups. At 28 days, 96% of patients in the IST arm and 100% in the TCD arm had achieved engraftment.
Patients in the TCD arm had a significantly lower rate of chronic GVHD than those in the IST arm. The TCD patients also had a lower rate of acute GVHD, but the difference was not significant.
At 100 days, the rates of grade 2-4 acute GVHD were 39% with IST and 23% with TCD grafts (P=0.07). At 2 years, the rates of chronic GVHD were 19% with TCD grafts and 50% with IST (P<0.001).
There were no significant differences between the 2 groups with regard to graft rejection, leukemia relapse, treatment-related mortality, disease-free survival, or overall survival. However, patients in the TCD arm had a higher rate of GVHD-free survival at 2 years—41% vs 19% (P=0.006).
The CliniMACS CD34 Reagent System is manufactured by Miltenyi Biotec. For more information on the system, see the company’s website.