Antidepressants, TMS, and the risk of affective switch in bipolar depression
Because treatment resistance is a pervasive problem in bipolar depression, the use of neuromodulation treatments such as transcranial magnetic stimulation (TMS) is increasing for patients with this disorder.1-7 Patients with bipolar disorder spend the majority of their symptomatic time in depression, which underscores the importance of providing effective treatment for bipolar depression, especially given the chronicity of the illness.2,3,5 Only a few medications are FDA-approved for treating bipolar depression (Table).
In this article, we describe the case of a patient with treatment-resistant bipolar depression undergoing adjunctive TMS treatment who experienced an affective switch from depression to mania. We also discuss evidence regarding the likelihood of treatment-emergent mania for antidepressants vs TMS in bipolar depression.
CASE
Ms. W, a 60-year-old White female with a history of bipolar I disorder and attention-deficit/hyperactivity disorder (ADHD), presented for TMS evaluation during a depressive episode. Throughout her life, she had experienced numerous manic episodes, but as she got older she noted an increasing frequency of depressive episodes. Over the course of her illness, she had completed adequate trials at therapeutic doses of many medications, including second-generation antipsychotics (SGAs) (aripiprazole, lurasidone, olanzapine, quetiapine), mood stabilizers (lamotrigine, lithium), and antidepressants (bupropion, venlafaxine, fluoxetine, mirtazapine, trazodone). A course of electroconvulsive therapy was not effective. Ms. W had a long-standing diagnosis of ADHD and had been treated with stimulants for >10 years, although it was unclear whether formal neuropsychological testing had been conducted to confirm this diagnosis. She had >10 suicide attempts and multiple psychiatric hospitalizations.
At her initial evaluation for TMS, Ms. W said she had depressive symptoms predominating for the past 2 years, including low mood, hopelessness, poor sleep, poor appetite, anhedonia, and suicidal ideation without a plan. At the time, she was taking clonazepam, 0.5 mg twice a day; lurasidone, 40 mg/d at bedtime; fluoxetine, 60 mg/d; trazodone, 50 mg/d at bedtime; and methylphenidate, 40 mg/d, and was participating in psychotherapy consistently.
After Ms. W and her clinicians discussed alternatives, risks, benefits, and adverse effects, she consented to adjunctive TMS treatment and provided written informed consent. The treatment plan was outlined as 6 weeks of daily TMS therapy (NeuroStar; Neuronetics, Malvern, PA), 1 treatment per day, 5 days a week. Her clinical status was assessed weekly using the Quick Inventory of Depressive Symptomatology (QIDS) for depression, Generalized Anxiety Disorder 7-item scale (GAD-7) for anxiety, and Young Mania Rating Scale (YMRS) for mania. The Figure shows the trends in Ms. W’s QIDS, GAD-7, and YMRS scores over the course of TMS treatment.
Prior to initiating TMS, her baseline scores were QIDS: 25, GAD-7: 9, and YMRS: 7, indicating very severe depression, mild anxiety, and the absence of mania. Ms. W’s psychotropic regimen remained unchanged throughout the course of her TMS treatment. After her motor threshold was determined, her TMS treatment began at 80% of motor threshold and was titrated up to 95% during the first treatment. By the second treatment, it was titrated up to 110%, and by the third treatment, to 120% of motor threshold, the intensity used for all remaining treatments.
Initially, Ms. W reported some improvement in her depression, but this improvement was short-lived, and she continued to have elevated QIDS scores throughout treatment. By treatment #21, her QIDS and GAD-7 scores remained elevated, and her YMRS score had increased to 12. Due to this increase in YMRS score, the YMRS was repeated on the next 2 treatment days (#22 and #23), and her score was 6 on both days. When Ms. W presented for treatment #25, she was disorganized and irritable, and she endorsed racing thoughts and decreased sleep. She was involuntarily hospitalized for mania, and TMS was discontinued. Unfortunately, she did not complete any clinical scales on that day. Upon admission to the hospital, Ms. W reported that at approximately the time of treatment #21, she had experienced a mood shift consisting of increased goal-directed activity, decreased need for sleep, racing thoughts, and increased frivolous spending. She was treated with lithium, 300 mg twice a day. Lurasidone was increased to 80 mg/d at bedtime, and she continued clonazepam, trazodone, and methylphenidate at the previous doses. Over 14 days, Ms. W’s mania gradually resolved, and she was discharged home.
Mixed evidence on the risk of switching
Currently, several TMS devices are FDA-cleared for treating unipolar major depressive disorder, obsessive-compulsive disorder, and certain types of migraine. In March 2020, the FDA granted Breakthrough Device Designation for one TMS device, the NeuroStar Advanced Therapy System, for the treatment of bipolar depression.8 This designation created an expedited pathway for prioritized FDA review of the NeuroStar Advanced Therapy clinical trial program.
Few published clinical studies have evaluated using TMS to treat patients with bipolar depression.9-15 As with any antidepressant treatment for bipolar depression, there is a risk of affective switch from depression to mania when using TMS. Most of the available literature on the treatment of bipolar depression focuses on the risk that antidepressant medications will induce an affective switch. This risk depends on the class of the antidepressant,16 and there is a paucity of studies examining the risk of switch with TMS.
Interpretation of available literature is limited due to inconsistencies in the definition of an affective switch, the variable length of treatment with antidepressants, the use of concurrent medications such as mood stabilizers, and confounders such as the natural course of switching in bipolar disorder.17 Overall, the evidence for treatment-emergent mania related to antidepressant use is mixed, and the reported rate of treatment-emergent mania varies. In a systematic review and meta-analysis of >20 randomized controlled trials that included 1,316 patients with bipolar disorder who received antidepressants, Fornaro et al18 found that the incidence of treatment-emergent mania was 11.8%. It is generally recommended that if antidepressants are used to treat patients with bipolar disorder, they should be given with a traditional mood stabilizer to prevent affective switches, although whether mood stabilizers can prevent such switches is unproven.19
In a literature review by Xia et al,20 the affective switch rate in patients with bipolar depression who were treated with TMS was 3.1%, which was not statistically different from the affective switch rate with sham treatment. However, most of the patients included in this analysis were receiving other medications concurrently, and the length of treatment was 2 weeks, which is shorter than the average length of TMS treatment in clinical practice. In a recent literature review by Rachid,21 TMS was found to possibly induce manic episodes when used as monotherapy or in combination with antidepressants in patients with bipolar depression. To reduce the risk of treatment-emergent mania, current recommendations advise the use of a mood stabilizer for a minimum of 2 weeks before initiating TMS.1
In our case, Ms. W was receiving antidepressants (fluoxetine and trazodone), lurasidone (an SGA that is FDA-approved for bipolar depression), and methylphenidate before starting TMS treatment. Fluoxetine, trazodone, and methylphenidate each may have contributed to an increased risk of an affective switch.1,22 Further studies are needed to clarify whether mood stabilizers or SGAs can prevent the development of mania in patients with bipolar depression who undergo TMS treatment.20
Because bipolar depression poses a major clinical challenge,23,24 it is imperative to consider alternative treatments. When evaluating such strategies, one may consider TMS in conjunction with a traditional mood stabilizer, because this regimen may carry a lower risk of treatment-emergent mania than antidepressants.1,25
Acknowledgment
The authors thank Dr. Sy Saeed for his expertise and guidance on this article.
Bottom Line
For patients with bipolar depression, treatment with transcranial magnetic stimulation in conjunction with a mood stabilizer may have lower rates of treatment-emergent mania than treatment with antidepressants.
Related Resources
- Bermudes RA, Lanocha K, Janicak PG, eds. Transcranial magnetic stimulation: clinical applications for psychiatric practice. American Psychiatric Association Publishing; 2017.
- Gold AK, Ornelas AC, Cirillo P, et al. Clinical applications of transcranial magnetic stimulation in bipolar disorder. Brain Behav. 2019;9(10):e01419. doi: 10.1002/brb3.1419
Drug Brand Names
Aripiprazole • Abilify
Bupropion • Wellbutrin
Cariprazine • Vraylar
Clonazepam • Klonopin
Fluoxetine • Prozac
Lamotrigine • Lamictal
Lithium • Eskalith, Lithobid
Lurasidone • Latuda
Methylphenidate • Ritalin, Concerta
Mirtazapine • Remeron
Olanzapine • Zyprexa
Olanzapine-fluoxetine • Symbyax
Quetiapine • Seroquel
Trazodone • Desyrel
Venlafaxine • Effexor
1. Aaronson ST, Croarkin PE. Transcranial magnetic stimulation for the treatment of other mood disorders. In: Bermudes RA, Lanocha K, Janicak PG, eds. Transcranial magnetic stimulation: clinical applications for psychiatric practice. American Psychiatric Association Publishing; 2017:127-156.
2. Geddes JR, Miklowitz DJ. Treatment of bipolar disorder. Lancet. 2013;381(9878):1672-1682.
3. Gitlin M. Treatment-resistant bipolar disorder. Mol Psychiatry. 2006;11(3):227-240.
4. Harrison PJ, Geddes JR, Tunbridge EM. The emerging neurobiology of bipolar disorder. Trends Neurosci. 2018;41(1):18-30.
5. Merikangas KR, Jin R, He JP, et al. Prevalence and correlates of bipolar spectrum disorder in the World Mental Health Survey Initiative. Arch Gen Psychiatry. 2011;68(3):241-251.
6. Myczkowski ML, Fernandes A, Moreno M, et al. Cognitive outcomes of TMS treatment in bipolar depression: safety data from a randomized controlled trial. J Affect Disord. 2018;235:20-26.
7. Tavares DF, Myczkowski ML, Alberto RL, et al. Treatment of bipolar depression with deep TMS: results from a double-blind, randomized, parallel group, sham-controlled clinical trial. Neuropsychopharmacology. 2017;42(13):2593-2601.
8. Neuronetics. FDA grants NeuroStar® Advanced Therapy System Breakthrough Device Designation to treat bipolar depression. Accessed February 2, 2021. https://www.globenewswire.com/news-release/2020/03/06/1996447/0/en/FDA-Grants-NeuroStar-Advanced-Therapy-System-Breakthrough-Device-Designation-to-Treat-Bipolar-Depression.html
9. Cohen RB, Brunoni AR, Boggio PS, et al. Clinical predictors associated with duration of repetitive transcranial magnetic stimulation treatment for remission in bipolar depression: a naturalistic study. J Nerv Ment Dis. 2010;198(9):679-681.
10. Connolly KR, Helmer A, Cristancho MA, et al. Effectiveness of transcranial magnetic stimulation in clinical practice post-FDA approval in the United States: results observed with the first 100 consecutive cases of depression at an academic medical center. J Clin Psychiatry. 2012;73(4):e567-e573.
11. Dell’osso B, D’Urso N, Castellano F, et al. Long-term efficacy after acute augmentative repetitive transcranial magnetic stimulation in bipolar depression: a 1-year follow-up study. J ECT. 2011;27(2):141-144.
12. Dell’Osso B, Mundo E, D’Urso N, et al. Augmentative repetitive navigated transcranial magnetic stimulation (rTMS) in drug-resistant bipolar depression. Bipolar Disord. 2009;11(1):76-81.
13. Harel EV, Zangen A, Roth Y, et al. H-coil repetitive transcranial magnetic stimulation for the treatment of bipolar depression: an add-on, safety and feasibility study. World J Biol Psychiatry. 2011;12(2):119-126.
14. Nahas Z, Kozel FA, Li X, et al. Left prefrontal transcranial magnetic stimulation (TMS) treatment of depression in bipolar affective disorder: a pilot study of acute safety and efficacy. Bipolar Disord. 2003;5(1):40-47.
15. Tamas RL, Menkes D, El-Mallakh RS. Stimulating research: a prospective, randomized, double-blind, sham-controlled study of slow transcranial magnetic stimulation in depressed bipolar patients. J Neuropsychiatry Clin Neurosci. 2007;19(2):198-199.
16. Tundo A, Cavalieri P, Navari S, et al. Treating bipolar depression - antidepressants and alternatives: a critical review of the literature. Acta Neuropsychiatr. 2011;23(3):94-105.
17. Gijsman HJ, Geddes JR, Rendell JM, et al. Antidepressants for bipolar depression: a systematic review of randomized, controlled trials. Am J Psychiatry. 2004;161(9):1537-1547.
18. Fornaro M, Anastasia A, Novello S, et al. Incidence, prevalence and clinical correlates of antidepressant‐emergent mania in bipolar depression: a systematic review and meta‐analysis. Bipolar Disord. 2018;20(3):195-227.
19. Pacchiarotti I, Bond DJ, Baldessarini RJ, et al. The International Society for Bipolar Disorders (ISBD) task force report on antidepressant use in bipolar disorders. Am J Psychiatry. 2013;170(11):1249-1262.
20. Xia G, Gajwani P, Muzina DJ, et al. Treatment-emergent mania in unipolar and bipolar depression: focus on repetitive transcranial magnetic stimulation. Int J Neuropsychopharmacol. 2008;11(1):119-130.
21. Rachid F. Repetitive transcranial magnetic stimulation and treatment-emergent mania and hypomania: a review of the literature. J Psychiatr Pract. 2017;23(2):150-159.
22. Viktorin A, Rydén E, Thase ME, et al. The risk of treatment-emergent mania with methylphenidate in bipolar disorder. Am J Psychiatry. 2017;174(4):341-348.
23. Hidalgo-Mazzei D, Berk M, Cipriani A, et al. Treatment-resistant and multi-therapy-resistant criteria for bipolar depression: consensus definition. Br J Psychiatry. 2019;214(1):27-35.
24. Baldessarini RJ, Vázquez GH, Tondo L. Bipolar depression: a major unsolved challenge. Int J Bipolar Disord. 2020;8(1):1.
25. Phillips AL, Burr RL, Dunner DL. Repetitive transcranial magnetic stimulation in the treatment of bipolar depression: Experience from a clinical setting. J Psychiatr Pract. 2020;26(1):37-45.
Because treatment resistance is a pervasive problem in bipolar depression, the use of neuromodulation treatments such as transcranial magnetic stimulation (TMS) is increasing for patients with this disorder.1-7 Patients with bipolar disorder tend to spend the majority of the time with depressive symptoms, which underscores the importance of providing effective treatment for bipolar depression, especially given the chronicity of this disease.2,3,5 Only a few medications are FDA-approved for treating bipolar depression (Table).
In this article, we describe the case of a patient with treatment-resistant bipolar depression undergoing adjunctive TMS treatment who experienced an affective switch from depression to mania. We also discuss evidence regarding the likelihood of treatment-emergent mania for antidepressants vs TMS in bipolar depression.
CASE
Ms. W, a 60-year-old White female with a history of bipolar I disorder and attention-deficit/hyperactivity disorder (ADHD), presented for TMS evaluation during a depressive episode. Throughout her life, she had experienced numerous manic episodes, but as she got older she noted an increasing frequency of depressive episodes. Over the course of her illness, she had completed adequate trials at therapeutic doses of many medications, including second-generation antipsychotics (SGAs) (aripiprazole, lurasidone, olanzapine, quetiapine), mood stabilizers (lamotrigine, lithium), and antidepressants (bupropion, venlafaxine, fluoxetine, mirtazapine, trazodone). A course of electroconvulsive therapy was not effective. Ms. W had a long-standing diagnosis of ADHD and had been treated with stimulants for >10 years, although it was unclear whether formal neuropsychological testing had been conducted to confirm this diagnosis. She had >10 suicide attempts and multiple psychiatric hospitalizations.
At her initial evaluation for TMS, Ms. W said she had depressive symptoms predominating for the past 2 years, including low mood, hopelessness, poor sleep, poor appetite, anhedonia, and suicidal ideation without a plan. At the time, she was taking clonazepam, 0.5 mg twice a day; lurasidone, 40 mg/d at bedtime; fluoxetine, 60 mg/d; trazodone, 50 mg/d at bedtime; and methylphenidate, 40 mg/d, and was participating in psychotherapy consistently.
After Ms. W and her clinicians discussed alternatives, risks, benefits, and adverse effects, she consented to adjunctive TMS treatment and provided written informed consent. The treatment plan was outlined as 6 weeks of daily TMS therapy (NeuroStar; Neuronetics, Malvern, PA), 1 treatment per day, 5 days a week. Her clinical status was assessed weekly using the Quick Inventory of Depressive Symptomatology (QIDS) for depression, Generalized Anxiety Disorder 7-item scale (GAD-7) for anxiety, and Young Mania Rating Scale (YMRS) for mania. The Figure shows the trends in Ms. W’s QIDS, GAD-7, and YMRS scores over the course of TMS treatment.
Prior to initiating TMS, her baseline scores were QIDS: 25, GAD-7: 9, and YMRS: 7, indicating very severe depression, mild anxiety, and the absence of mania. Ms. W’s psychotropic regimen remained unchanged throughout the course of her TMS treatment. After her motor threshold was determined, her TMS treatment began at 80% of motor threshold and was titrated up to 95% at the first treatment. By the second treatment, it was titrated up to 110%. By the third treatment, it was titrated up to 120% of motor threshold, which is the percentage used for the remaining treatments.
Initially, Ms. W reported some improvement in her depression, but this improvement was short-lived, and she continued to have elevated QIDS scores throughout treatment. By treatment #21, her QIDS and GAD-7 scores remained elevated, and her YMRS score had increased to 12. Due to this increase in YMRS score, the YMRS was repeated on the next 2 treatment days (#22 and #23), and her score was 6 on both days. When Ms. W presented for treatment #25, she was disorganized, irritable, and endorsed racing thoughts and decreased sleep. She was involuntarily hospitalized for mania, and TMS was discontinued. Unfortunately, she did not complete any clinical scales on that day. Upon admission to the hospital, Ms. W reported that at approximately the time of treatment #21, she had a fluctuation in her mood that consisted of increased goal-directed activity, decreased need for sleep, racing thoughts, and increased frivolous spending. She was treated with lithium, 300 mg twice a day. Lurasidone was increased to 80 mg/d at bedtime, and she continued clonazepam, trazodone, and methylphenidate at the previous doses. Over 14 days, Ms. W’s mania gradually resolved, and she was discharged home.
Continue to: Mixed evidence on the risk of switching
Mixed evidence on the risk of switching
Currently, several TMS devices are FDA-cleared for treating unipolar major depressive disorder, obsessive-compulsive disorder, and certain types of migraine. In March 2020, the FDA granted Breakthrough Device Designation for one TMS device, the NeuroStar Advanced Therapy System, for the treatment of bipolar depression.8 This designation created an expedited pathway for prioritized FDA review of the NeuroStar Advanced Therapy clinical trial program.
Few published clinical studies have evaluated using TMS to treat patients with bipolar depression.9-15 As with any antidepressant treatment for bipolar depression, there is a risk of affective switch from depression to mania when using TMS. Most of the literature available regarding the treatment of bipolar depression focuses on the risk of antidepressant medications to induce an affective switch. This risk depends on the class of the antidepressant,16 and there is a paucity of studies examining the risk of switch with TMS.
Interpretation of available literature is limited due to inconsistencies in the definition of an affective switch, the variable length of treatment with antidepressants, the use of concurrent medications such as mood stabilizers, and confounders such as the natural course of switching in bipolar disorder.17 Overall, the evidence for treatment-emergent mania related to antidepressant use is mixed, and the reported rate of treatment-emergent mania varies. In a systematic review and meta-analysis of >20 randomized controlled trials that included 1,316 patients with bipolar disorder who received antidepressants, Fornaro et al18 found that the incidence of treatment-emergent mania was 11.8%. It is generally recommended that if antidepressants are used to treat patients with bipolar disorder, they should be given with a traditional mood stabilizer to prevent affective switches, although whether mood stabilizers can prevent such switches is unproven.19
In a literature review by Xia et al,20 the affective switch rate in patients with bipolar depression who were treated with TMS was 3.1%, which was not statistically different from the affective switch rate with sham treatment.However, most of the patients included in this analysis were receiving other medications concurrently, and the length of treatment was 2 weeks, which is shorter than the average length of TMS treatment in clinical practice. In a recent literature review by Rachid,21 TMS was found to possibly induce manic episodes when used as monotherapy or in combination with antidepressants in patients with bipolar depression. To reduce the risk of treatment-emergent mania, current recommendations advise the use of a mood stabilizer for a minimum of 2 weeks before initiating TMS.1
In our case, Ms. W was receiving antidepressants (fluoxetine and trazodone), lurasidone (an SGA that is FDA-approved for bipolar depression), and methylphenidate before starting TMS treatment. Fluoxetine, trazodone, and methylphenidate may possibly contribute to an increased risk of an affective switch.1,22 Further studies are needed to clarify whether mood stabilizers or SGAs can prevent the development of mania in patients with bipolar depression who undergo TMS treatment.20
Continue to: Because bipolar depression poses...
Because bipolar depression poses a major clinical challenge,23,24 it is imperative to consider alternate treatments. When evaluating alternative treatment strategies, one may consider TMS in conjunction with a traditional mood stabilizer because this regimen may have a lower risk of treatment-emergent mania compared with antidepressants.1,25
Acknowledgment
The authors thank Dr. Sy Saeed for his expertise and guidance on this article.
Bottom Line
For patients with bipolar depression, treatment with transcranial magnetic stimulation in conjunction with a mood stabilizer may have lower rates of treatment-emergent mania than treatment with antidepressants.
Related Resources
- Transcranial magnetic stimulation: clinical applications for psychiatric practice. Bermudes RA, Lanocha K, Janicak PG, eds. American Psychiatric Association Publishing; 2017.
- Gold AK, Ornelas AC, Cirillo P, et al. Clinical applications of transcranial magnetic stimulation in bipolar disorder. Brain Behav. 2019;9(10):e01419. doi: 10.1002/brb3.1419
Drug Brand Names
Aripiprazole • Abilify
Bupropion • Wellbutrin
Cariprazine • Vraylar
Clonazepam • Klonopin
Fluoxetine • Prozac
Lamotrigine • Lamictal
Lithium • Eskalith, Lithobid
Lurasidone • Latuda
Methylphenidate • Ritalin, Concerta
Mirtazapine • Remeron
Olanzapine • Zyprexa
Olanzapine-fluoxetine • Symbyax
Quetiapine • Seroquel
Trazodone • Desyrel
Venlafaxine • Effexor
Because treatment resistance is a pervasive problem in bipolar depression, the use of neuromodulation treatments such as transcranial magnetic stimulation (TMS) is increasing for patients with this disorder.1-7 Patients with bipolar disorder tend to spend the majority of the time with depressive symptoms, which underscores the importance of providing effective treatment for bipolar depression, especially given the chronicity of this disease.2,3,5 Only a few medications are FDA-approved for treating bipolar depression (Table).
In this article, we describe the case of a patient with treatment-resistant bipolar depression undergoing adjunctive TMS treatment who experienced an affective switch from depression to mania. We also discuss evidence regarding the likelihood of treatment-emergent mania for antidepressants vs TMS in bipolar depression.
CASE
Ms. W, a 60-year-old White female with a history of bipolar I disorder and attention-deficit/hyperactivity disorder (ADHD), presented for TMS evaluation during a depressive episode. Throughout her life, she had experienced numerous manic episodes, but as she got older she noted an increasing frequency of depressive episodes. Over the course of her illness, she had completed adequate trials at therapeutic doses of many medications, including second-generation antipsychotics (SGAs) (aripiprazole, lurasidone, olanzapine, quetiapine), mood stabilizers (lamotrigine, lithium), and antidepressants (bupropion, venlafaxine, fluoxetine, mirtazapine, trazodone). A course of electroconvulsive therapy was not effective. Ms. W had a long-standing diagnosis of ADHD and had been treated with stimulants for >10 years, although it was unclear whether formal neuropsychological testing had been conducted to confirm this diagnosis. She had >10 suicide attempts and multiple psychiatric hospitalizations.
At her initial evaluation for TMS, Ms. W said she had depressive symptoms predominating for the past 2 years, including low mood, hopelessness, poor sleep, poor appetite, anhedonia, and suicidal ideation without a plan. At the time, she was taking clonazepam, 0.5 mg twice a day; lurasidone, 40 mg/d at bedtime; fluoxetine, 60 mg/d; trazodone, 50 mg/d at bedtime; and methylphenidate, 40 mg/d, and was participating in psychotherapy consistently.
After Ms. W and her clinicians discussed alternatives, risks, benefits, and adverse effects, she consented to adjunctive TMS treatment and provided written informed consent. The treatment plan was outlined as 6 weeks of daily TMS therapy (NeuroStar; Neuronetics, Malvern, PA), 1 treatment per day, 5 days a week. Her clinical status was assessed weekly using the Quick Inventory of Depressive Symptomatology (QIDS) for depression, Generalized Anxiety Disorder 7-item scale (GAD-7) for anxiety, and Young Mania Rating Scale (YMRS) for mania. The Figure shows the trends in Ms. W’s QIDS, GAD-7, and YMRS scores over the course of TMS treatment.
Prior to initiating TMS, her baseline scores were QIDS: 25, GAD-7: 9, and YMRS: 7, indicating very severe depression, mild anxiety, and the absence of mania. Ms. W’s psychotropic regimen remained unchanged throughout the course of her TMS treatment. After her motor threshold was determined, her TMS treatment began at 80% of motor threshold and was titrated up to 95% at the first treatment. By the second treatment, it was titrated up to 110%. By the third treatment, it was titrated up to 120% of motor threshold, which is the percentage used for the remaining treatments.
Initially, Ms. W reported some improvement in her depression, but this improvement was short-lived, and she continued to have elevated QIDS scores throughout treatment. By treatment #21, her QIDS and GAD-7 scores remained elevated, and her YMRS score had increased to 12. Due to this increase in YMRS score, the YMRS was repeated on the next 2 treatment days (#22 and #23), and her score was 6 on both days. When Ms. W presented for treatment #25, she was disorganized, irritable, and endorsed racing thoughts and decreased sleep. She was involuntarily hospitalized for mania, and TMS was discontinued. Unfortunately, she did not complete any clinical scales on that day. Upon admission to the hospital, Ms. W reported that at approximately the time of treatment #21, she had a fluctuation in her mood that consisted of increased goal-directed activity, decreased need for sleep, racing thoughts, and increased frivolous spending. She was treated with lithium, 300 mg twice a day. Lurasidone was increased to 80 mg/d at bedtime, and she continued clonazepam, trazodone, and methylphenidate at the previous doses. Over 14 days, Ms. W’s mania gradually resolved, and she was discharged home.
Continue to: Mixed evidence on the risk of switching
Mixed evidence on the risk of switching
Currently, several TMS devices are FDA-cleared for treating unipolar major depressive disorder, obsessive-compulsive disorder, and certain types of migraine. In March 2020, the FDA granted Breakthrough Device Designation for one TMS device, the NeuroStar Advanced Therapy System, for the treatment of bipolar depression.8 This designation created an expedited pathway for prioritized FDA review of the NeuroStar Advanced Therapy clinical trial program.
Few published clinical studies have evaluated using TMS to treat patients with bipolar depression.9-15 As with any antidepressant treatment for bipolar depression, there is a risk of affective switch from depression to mania when using TMS. Most of the literature available regarding the treatment of bipolar depression focuses on the risk of antidepressant medications to induce an affective switch. This risk depends on the class of the antidepressant,16 and there is a paucity of studies examining the risk of switch with TMS.
Interpretation of available literature is limited due to inconsistencies in the definition of an affective switch, the variable length of treatment with antidepressants, the use of concurrent medications such as mood stabilizers, and confounders such as the natural course of switching in bipolar disorder.17 Overall, the evidence for treatment-emergent mania related to antidepressant use is mixed, and the reported rate of treatment-emergent mania varies. In a systematic review and meta-analysis of >20 randomized controlled trials that included 1,316 patients with bipolar disorder who received antidepressants, Fornaro et al18 found that the incidence of treatment-emergent mania was 11.8%. It is generally recommended that if antidepressants are used to treat patients with bipolar disorder, they should be given with a traditional mood stabilizer to prevent affective switches, although whether mood stabilizers can prevent such switches is unproven.19
In a literature review by Xia et al,20 the affective switch rate in patients with bipolar depression who were treated with TMS was 3.1%, which was not statistically different from the affective switch rate with sham treatment.However, most of the patients included in this analysis were receiving other medications concurrently, and the length of treatment was 2 weeks, which is shorter than the average length of TMS treatment in clinical practice. In a recent literature review by Rachid,21 TMS was found to possibly induce manic episodes when used as monotherapy or in combination with antidepressants in patients with bipolar depression. To reduce the risk of treatment-emergent mania, current recommendations advise the use of a mood stabilizer for a minimum of 2 weeks before initiating TMS.1
In our case, Ms. W was receiving antidepressants (fluoxetine and trazodone), lurasidone (an SGA that is FDA-approved for bipolar depression), and methylphenidate before starting TMS treatment. Fluoxetine, trazodone, and methylphenidate may possibly contribute to an increased risk of an affective switch.1,22 Further studies are needed to clarify whether mood stabilizers or SGAs can prevent the development of mania in patients with bipolar depression who undergo TMS treatment.20
Because bipolar depression poses a major clinical challenge,23,24 it is imperative to consider alternative treatments. One such strategy is TMS in conjunction with a traditional mood stabilizer, because this regimen may carry a lower risk of treatment-emergent mania than antidepressants.1,25
Acknowledgment
The authors thank Dr. Sy Saeed for his expertise and guidance on this article.
Bottom Line
For patients with bipolar depression, treatment with transcranial magnetic stimulation in conjunction with a mood stabilizer may have lower rates of treatment-emergent mania than treatment with antidepressants.
Related Resources
- Transcranial magnetic stimulation: clinical applications for psychiatric practice. Bermudes RA, Lanocha K, Janicak PG, eds. American Psychiatric Association Publishing; 2017.
- Gold AK, Ornelas AC, Cirillo P, et al. Clinical applications of transcranial magnetic stimulation in bipolar disorder. Brain Behav. 2019;9(10):e01419. doi: 10.1002/brb3.1419
Drug Brand Names
Aripiprazole • Abilify
Bupropion • Wellbutrin
Cariprazine • Vraylar
Clonazepam • Klonopin
Fluoxetine • Prozac
Lamotrigine • Lamictal
Lithium • Eskalith, Lithobid
Lurasidone • Latuda
Methylphenidate • Ritalin, Concerta
Mirtazapine • Remeron
Olanzapine • Zyprexa
Olanzapine-fluoxetine • Symbyax
Quetiapine • Seroquel
Trazodone • Desyrel
Venlafaxine • Effexor
1. Aaronson ST, Croarkin PE. Transcranial magnetic stimulation for the treatment of other mood disorders. In: Bermudes RA, Lanocha K, Janicak PG, eds. Transcranial magnetic stimulation: clinical applications for psychiatric practice. American Psychiatric Association Publishing; 2017:127-156.
2. Geddes JR, Miklowitz DJ. Treatment of bipolar disorder. Lancet. 2013;381(9878):1672-1682.
3. Gitlin M. Treatment-resistant bipolar disorder. Molecular Psychiatry. 2006;11(3):227-240.
4. Harrison PJ, Geddes JR, Tunbridge EM. The emerging neurobiology of bipolar disorder. Trends Neurosci. 2018;41(1):18-30.
5. Merikangas KR, Jin R, He JP, et al. Prevalence and correlates of bipolar spectrum disorder in the World Mental Health Survey Initiative. Arch Gen Psychiatry. 2011;68(3):241-251.
6. Myczkowski ML, Fernandes A, Moreno M, et al. Cognitive outcomes of TMS treatment in bipolar depression: safety data from a randomized controlled trial. J Affect Disord. 2018;235:20-26.
7. Tavares DF, Myczkowski ML, Alberto RL, et al. Treatment of bipolar depression with deep TMS: results from a double-blind, randomized, parallel group, sham-controlled clinical trial. Neuropsychopharmacology. 2017;42(13):2593-2601.
8. Neuronetics. FDA grants NeuroStar® Advanced Therapy System Breakthrough Device Designation to treat bipolar depression. Accessed February 2, 2021. https://www.globenewswire.com/news-release/2020/03/06/1996447/0/en/FDA-Grants-NeuroStar-Advanced-Therapy-System-Breakthrough-Device-Designation-to-Treat-Bipolar-Depression.html
9. Cohen RB, Brunoni AR, Boggio PS, et al. Clinical predictors associated with duration of repetitive transcranial magnetic stimulation treatment for remission in bipolar depression: a naturalistic study. J Nerv Ment Dis. 2010;198(9):679-681.
10. Connolly KR, Helmer A, Cristancho MA, et al. Effectiveness of transcranial magnetic stimulation in clinical practice post-FDA approval in the United States: results observed with the first 100 consecutive cases of depression at an academic medical center. J Clin Psychiatry. 2012;73(4):e567-e573.
11. Dell’osso B, D’Urso N, Castellano F, et al. Long-term efficacy after acute augmentative repetitive transcranial magnetic stimulation in bipolar depression: a 1-year follow-up study. J ECT. 2011;27(2):141-144.
12. Dell’Osso B, Mundo E, D’Urso N, et al. Augmentative repetitive navigated transcranial magnetic stimulation (rTMS) in drug-resistant bipolar depression. Bipolar Disord. 2009;11(1):76-81.
13. Harel EV, Zangen A, Roth Y, et al. H-coil repetitive transcranial magnetic stimulation for the treatment of bipolar depression: an add-on, safety and feasibility study. World J Biol Psychiatry. 2011;12(2):119-126.
14. Nahas Z, Kozel FA, Li X, et al. Left prefrontal transcranial magnetic stimulation (TMS) treatment of depression in bipolar affective disorder: a pilot study of acute safety and efficacy. Bipolar Disord. 2003;5(1):40-47.
15. Tamas RL, Menkes D, El-Mallakh RS. Stimulating research: a prospective, randomized, double-blind, sham-controlled study of slow transcranial magnetic stimulation in depressed bipolar patients. J Neuropsychiatry Clin Neurosci. 2007;19(2):198-199.
16. Tundo A, Cavalieri P, Navari S, et al. Treating bipolar depression - antidepressants and alternatives: a critical review of the literature. Acta Neuropsychiatrica. 2011;23(3):94-105.
17. Gijsman HJ, Geddes JR, Rendell JM, et al. Antidepressants for bipolar depression: a systematic review of randomized, controlled trials. Am J Psychiatry. 2004;161(9):1537-1547.
18. Fornaro M, Anastasia A, Novello S, et al. Incidence, prevalence and clinical correlates of antidepressant‐emergent mania in bipolar depression: a systematic review and meta‐analysis. Bipolar Disord. 2018;20(3):195-227.
19. Pacchiarotti I, Bond DJ, Baldessarini RJ, et al. The International Society for Bipolar Disorders (ISBD) task force report on antidepressant use in bipolar disorders. Am J Psychiatry. 2013;170(11):1249-1262.
20. Xia G, Gajwani P, Muzina DJ, et al. Treatment-emergent mania in unipolar and bipolar depression: focus on repetitive transcranial magnetic stimulation. Int J Neuropsychopharmacol. 2008;11(1):119-130.
21. Rachid F. Repetitive transcranial magnetic stimulation and treatment-emergent mania and hypomania: a review of the literature. J Psychiatr Pract. 2017;23(2):150-159.
22. Viktorin A, Rydén E, Thase ME, et al. The risk of treatment-emergent mania with methylphenidate in bipolar disorder. Am J Psychiatry. 2017;174(4):341-348.
23. Hidalgo-Mazzei D, Berk M, Cipriani A, et al. Treatment-resistant and multi-therapy-resistant criteria for bipolar depression: consensus definition. Br J Psychiatry. 2019;214(1):27-35.
24. Baldessarini RJ, Vázquez GH, Tondo L. Bipolar depression: a major unsolved challenge. Int J Bipolar Disord. 2020;8(1):1.
25. Phillips AL, Burr RL, Dunner DL. Repetitive transcranial magnetic stimulation in the treatment of bipolar depression: Experience from a clinical setting. J Psychiatr Pract. 2020;26(1):37-45.
More than just 3 dogs: Is burnout getting in the way of knowing our patients?
Do you ever leave work thinking “Why do I always feel so tired after my shift?” “How can I overcome this fatigue?” “Is this what I expected?” “How can I get over the dread of so much administrative work when I want more time for my patients?” As clinicians, we face these and many other questions every day. These questions are the result of feeling entrapped in a health care system that has forgotten that clinicians need enough time to get to know and connect with their patients. Burnout is real, and relying on wellness activities is not sufficient to overcome it. Instead, taking the time for some introspection and self-reflection can help to overcome these difficulties.
A valuable lesson
Ten months into my intern year as a psychiatry resident, while on a busy night shift at the psychiatry emergency unit, an 86-year-old man arrived alone, hopeless, and with persistent death wishes. He needed to be heard and comforted by someone. Although he understood the nonnegotiable need to be hospitalized, he was extremely hesitant. But why? After all, he expressed wanting to get better and feared going back home alone, yet he was unwilling to be admitted to the hospital for acute care.
I knew I had to address the reason behind my patient’s ambivalence by further exploring his history. Nonetheless, my physician-in-training mind was battling feelings of stress secondary to what at the time seemed to be a never-ending to-do list full of nurses’ requests and patient-related tasks. Despite an unconscious temptation to rush through the history to please my go, go, go! trainee mind, I do not regret having taken the time to ask and address the often-feared “why.” Why was my patient reluctant to accept my recommendation?
To my surprise, it turned out to be an important matter. He said, “I have 3 dogs back home I don’t want to leave alone. They are the only living memory of my wife, who passed away 5 months ago. They help me stay alive.” I was struck by a feeling of empathy, but also guilt for having almost rushed through the history and not being thorough enough to ask why.
Take time to explore ‘why’
Do we really recognize the importance of being inquisitive in our history-taking? What might seem a simple matter to us (in my patient’s case, his 3 dogs were his main support system) can be a significant cause of a patient’s distress. A patient’s hesitancy to accept our recommendations can be secondary to reasons that, unfortunately, we at times only partially explore, or do not explore at all. Asking why can open Pandora’s box. It can uncover feelings and emotions such as frustration, anger, anxiety, and sorrow. It can also reveal uncertainties regarding topics such as race, gender identity, sexual orientation, socioeconomic status, and religion. We should be driven by humble curiosity, and tailor the interview by purposefully asking questions with the goal of learning and understanding our patients’ concerns. This practice serves to cultivate honest and trustworthy physician-patient relationships founded on empathy and respect.
If we know that obtaining an in-depth history is crucial for formulating a patient’s treatment plan, why do we sometimes fall into the trap of obtaining superficial ones, at times limiting ourselves to checklists? Reasons for not delving into our patients’ histories include (but are not limited to) an overload of patients, time constraints, a physician’s personal style, unconscious bias, suboptimal mentoring, and burnout. Of all these reasons, I worry the most about burnout. Physicians face insurmountable academic, institutional, and administrative demands. These constraints inarguably contribute to feeling rushed and, eventually, possibly burned out.
Using self-reflection to prevent burnout
Physician burnout—as well as attempts to define, identify, target, and prevent it—has been on the rise in the past decades. If burnout affects the physician-patient relationship, we should make efforts to mitigate it. One should try to rely on internal, rather than external, influences to positively influence our delivery of care. One way to do this is by really getting to know the patient in front of us: a father, mother, brother, sister, member of the community, etc. Understanding our patient’s needs and concerns promotes empathy and connectedness. Another way is to exercise self-reflection by asking ourselves: How do I feel about the care I delivered today? Did I make an effort to fully understand my patients’ concerns? Did I make each patient feel understood? Was I rushing through the day, or was I mindful of the person in front of me? Did I deliver the care I wish I had received?
Although there are innumerable ways to target physician burnout, these self-reflections are quick, simple exercises that easily can be woven into a clinician’s busy schedule. The goal is to be mindful of improving the quality of our interactions with patients to ultimately cultivate our own well-being by potentiating a sense of fulfilment and satisfaction with our profession. I encourage clinicians to always go after the “why.” After all, why not? Thankfully, after some persuasion, my patient accepted voluntary admission, and arranged with neighbors to take care of his 3 dogs.
A resident’s guide to lithium
Lithium has been used in psychiatry for more than half a century and is considered the gold standard for treating acute mania and maintenance treatment of bipolar disorder.1 Evidence supports its use to reduce suicidal behavior and as an adjunctive treatment for major depressive disorder.2 However, lithium has fallen out of favor because of its narrow therapeutic index as well as the introduction of newer psychotropic medications that have a quicker onset of action and do not require strict blood monitoring. For residents early in their training, keeping track of the laboratory monitoring and medical screening can be confusing. Different institutions and countries have specific guidelines and recommendations for monitoring patients receiving lithium, which adds to the confusion.
We completed a literature review to develop clear and concise recommendations for lithium monitoring for residents in our psychiatry residency program. These recommendations outline screening at baseline and after patients treated with lithium achieve stability. Table 13-11 outlines medical screening parameters, including bloodwork, that should be completed before initiating treatment, and how often such screening should be repeated. Table 2 incorporates these parameters into progress notes in the electronic medical record to keep track of the laboratory values and when they were last drawn. Our aim is to help residents stay organized and prevent missed screenings.
How often should lithium levels be monitored?
After starting a patient on lithium, check the level within 5 to 7 days, and again 5 to 7 days after each dose change. Draw the lithium level 10 to 14 hours after the patient’s last dose (12 hours is ideal).1 Because of dosage changes, lithium levels usually are monitored more frequently during the first 3 months of treatment, until therapeutic levels are reached or symptoms are controlled. Once the patient is stable, it is recommended to monitor lithium levels every 3 months for the first year and every 6 months thereafter, taking into account the patient’s age, medical health, and how reliably the patient reports symptoms/adverse effects.3,5 Continue monitoring levels every 3 months in older adults; in patients with renal dysfunction, thyroid dysfunction, hypercalcemia, or other significant medical comorbidities; and in those who are taking medications that affect lithium, such as pain medications (nonsteroidal anti-inflammatory drugs can raise lithium levels), certain antihypertensives (angiotensin-converting-enzyme inhibitors can raise lithium levels), and diuretics (thiazide diuretics can raise lithium levels; osmotic diuretics and carbonic anhydrase inhibitors can reduce lithium levels).1,3,5
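The interval rules above can be condensed into a small scheduling sketch. This is purely illustrative (the function name, the boolean encoding of risk factors, and the choice of the earliest day in each recommended window are our own assumptions), not clinical decision software:

```python
from datetime import date, timedelta

def next_lithium_level_due(last_draw: date, months_on_treatment: int,
                           dose_changed: bool, high_risk: bool) -> date:
    """Illustrative sketch of the monitoring intervals described above.

    high_risk: older adults; renal or thyroid dysfunction; hypercalcemia;
    or interacting drugs (NSAIDs, ACE inhibitors, thiazide diuretics).
    """
    if dose_changed:
        # Recheck 5-7 days after a dose change (earliest day used here)
        return last_draw + timedelta(days=5)
    if months_on_treatment < 12 or high_risk:
        # Every 3 months during the first year, or indefinitely if high risk
        return last_draw + timedelta(days=90)
    # Every 6 months once stable after the first year
    return last_draw + timedelta(days=180)
```

For example, a stable, low-risk patient 2 years into treatment whose last draw was January 1 would next be due roughly 6 months later.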
Lithium levels could vary by up to 0.5 mEq/L during transition between manic, euthymic, and depressive states.12 On a consistent dosage, lithium levels decrease during mania because of hemodilution, and increase during depression secondary to physiological effects specific to these episodes.13,14
Recommendations for plasma lithium levels (trough levels)
Mania. Lithium levels of 0.8 to 1.2 mEq/L often are needed to achieve symptom control during manic episodes.15 As levels approach 1.5 mEq/L, patients are at increased risk for intolerable adverse effects (eg, nausea and vomiting) and toxicity.16,17 Adverse effects at higher levels may lead patients to abruptly discontinue lithium. Patients who experience mania before a depressive episode at illness onset tend to have a better treatment response with lithium.18 Lithium monotherapy has been shown to be less effective for acute mania than antipsychotics or combination therapies.19 Consider combining lithium with valproate or antipsychotics for patients who have tolerated lithium in the past and plan to use lithium for maintenance treatment.20
Maintenance. In adults, the lithium level should be 0.60 to 0.80 mEq/L, but consider levels of 0.40 to 0.60 mEq/L in patients who have a good response to lithium but develop adverse effects at higher levels.21 For patients who do not respond to treatment, such as those with severe mania, maintenance levels can be increased to 0.76 to 0.90 mEq/L.22 These same maintenance levels can be used for children and adolescents. In older adults, aim for maintenance levels of 0.4 to 0.6 mEq/L; the maximum level is 0.7 to 0.8 mEq/L for patients age 65 to 79, and should not exceed 0.7 mEq/L in patients age >80. Lithium levels <0.4 mEq/L do not appear to be effective.21
Depression. Aim for a lithium level of 0.6 to 1.0 mEq/L for patients with depression.11
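Taken together, the target trough ranges above can be summarized as a simple lookup. The function and its string-based indication labels are illustrative assumptions for teaching purposes, not a validated dosing tool:

```python
def lithium_target_range(indication: str, age: int) -> tuple:
    """Target trough range in mEq/L, per the recommendations above (illustrative)."""
    if indication == "mania":
        return (0.8, 1.2)      # approaching 1.5 mEq/L risks adverse effects/toxicity
    if indication == "depression":
        return (0.6, 1.0)
    if indication == "maintenance":
        if age >= 65:
            # Older adults: aim 0.4-0.6; ceiling 0.7-0.8 (age 65-79), 0.7 (age >80)
            return (0.4, 0.6)
        # Adults (and, per the text, children/adolescents); consider 0.4-0.6
        # for good responders with adverse effects at higher levels
        return (0.6, 0.8)
    raise ValueError(f"unknown indication: {indication!r}")
```

A trough of 0.9 mEq/L would thus fall inside the acute mania range but above the usual adult maintenance target.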
Renal function monitoring frequency
Obtain a basic metabolic panel or comprehensive metabolic panel to establish baseline levels of creatinine, blood urea nitrogen (BUN), and estimated glomerular filtration rate (eGFR). Repeat testing at Week 12 and at 6 months to detect any changes. Renal function can be monitored every 6 to 12 months in stable patients, but should be watched closely when a patient’s clinical status changes.3 A new, lower eGFR value after starting lithium therapy should be investigated with a repeat test in 2 weeks.23 Mild elevations in creatinine should be monitored, and further medical workup with a nephrologist is recommended for patients with a creatinine level ≥1.6 mg/dL.24 It is important to note that creatinine might remain within normal limits even when glomerular function is considerably reduced. Creatinine levels also vary with body mass and diet: they can be low in patients with little muscle mass and elevated in patients who consume large amounts of protein.23,25
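The renal follow-up thresholds above can be sketched in a few lines. The function name and return strings are hypothetical labels of our own; this is a teaching sketch, not clinical software:

```python
def renal_followup(creatinine_mg_dl: float, egfr_newly_lower: bool) -> str:
    """Sketch of the renal follow-up thresholds described above (illustrative)."""
    if creatinine_mg_dl >= 1.6:
        # Creatinine >= 1.6 mg/dL warrants nephrology workup
        return "nephrology workup"
    if egfr_newly_lower:
        # A new, lower eGFR after starting lithium: repeat in 2 weeks
        return "repeat eGFR in 2 weeks"
    # Otherwise, routine monitoring in stable patients
    return "routine monitoring every 6-12 months"
```

Note that a normal creatinine does not rule out reduced glomerular function, which is why the eGFR branch is checked independently of the creatinine threshold.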
Ordering a basic metabolic panel also allows electrolyte monitoring. Hyponatremia and dehydration can lead to elevated lithium levels and result in toxicity; hypokalemia might increase the risk of lithium-induced cardiac toxicity. Monitor calcium (corrected serum calcium) because hypercalcemia has been seen in patients treated with lithium.
Thyroid function monitoring frequency
Obtain levels of thyroid-stimulating hormone with reflex to free T4 at baseline, 12 weeks, and 6 months. Monitor thyroid function every 6 to 12 months in stable patients and when a patient’s clinical status changes, such as with new reports of medical or psychiatric symptoms and when there is concern for thyroid dysfunction.3
Lithium and neurotoxicity
Lithium is known to have neurotoxic effects, such as effects on fast-acting neurons leading to dyscoordination or tremor, even at therapeutic levels.26 This is especially the case when lithium is combined with an antipsychotic,26,27 a combination that is used to treat bipolar I disorder with psychotic features. Older adults are at greater risk for neurotoxicity because of physiological changes associated with increasing age.28
Educate patients about adherence, diet, and exercise
Patients might stop taking their psychotropic medications when they start feeling better. Instruct patients to discuss discontinuation with the prescribing clinician before they stop any medication. Educate patients that rapidly discontinuing lithium therapy puts them at high risk of relapse29 and increases the risk of developing treatment-refractory symptoms.23,30 Emphasize the importance of staying hydrated and maintaining adequate sodium in their diet.17,31 Consuming excessive sodium can reduce lithium levels.17,32 Lithium levels could increase when patients experience excessive sweating, such as during exercise or being outside on warm days, because of sodium and volume loss.17,33
1. Tondo L, Alda M, Bauer M, et al. Clinical use of lithium salts: guide for users and prescribers. Int J Bipolar Disord. 2019;7(1):16. doi:10.1186/s40345-019-0151-2
2. Azab AN, Shnaider A, Osher Y, et al. Lithium nephrotoxicity. Int J Bipolar Disord. 2015;3(1):28. doi:10.1186/s40345-015-0028-y
3. American Psychiatric Association. Practice guideline for the treatment of patients with bipolar disorder (revision). Am J Psychiatry. 2002;159:1-50.
4. Yatham LN, Kennedy SH, Parikh SV, et al. Canadian Network for Mood and Anxiety Treatments (CANMAT) and International Society for Bipolar Disorders (ISBD) collaborative update of CANMAT guidelines for the management of patients with bipolar disorder: update 2013. Bipolar Disord. 2013;15:1‐44. doi:10.1111/bdi.12025
5. National Collaborating Center for Mental Health (UK). Bipolar disorder: the NICE guideline on the assessment and management of bipolar disorder in adults, children and young people in primary and secondary care. The British Psychological Society and The Royal College of Psychiatrists; 2014.
6. Kupka R, Goossens P, van Bendegem M, et al. Multidisciplinaire richtlijn bipolaire stoornissen. Nederlandse Vereniging voor Psychiatrie (NVvP); 2015. Accessed August 10, 2020. http://www.nvvp.net/stream/richtlijn-bipolaire-stoornissen-2015
7. Malhi GS, Bassett D, Boyce P, et al. Royal Australian and New Zealand College of Psychiatrists clinical practice guidelines for mood disorders. Aust N Z J Psychiatry. 2015;49:1087‐1206. doi:10.1177/0004867415617657
8. Nederlof M, Heerdink ER, Egberts ACG, et al. Monitoring of patients treated with lithium for bipolar disorder: an international survey. Int J Bipolar Disord. 2018;6(1):12. doi:10.1186/s40345-018-0120-1
9. Leo RJ, Sharma M, Chrostowski DA. A case of lithium-induced symptomatic hypercalcemia. Prim Care Companion J Clin Psychiatry. 2010;12(4):PCC.09l00917. doi:10.4088/PCC.09l00917yel
10. McHenry CR, Lee K. Lithium therapy and disorders of the parathyroid glands. Endocr Pract. 1996;2(2):103-109. doi:10.4158/EP.2.2.103
11. Stahl SM. The prescribers guide: Stahl’s essential psychopharmacology. 6th ed. Cambridge University Press; 2017.
12. Kukopulos A, Reginaldi D. Variations of serum lithium concentrations correlated with the phases of manic-depressive psychosis. Agressologie. 1978;19(D):219-222.
13. Rittmannsberger H, Malsiner-Walli G. Mood-dependent changes of serum lithium concentration in a rapid cycling patient maintained on stable doses of lithium carbonate. Bipolar Disord. 2013;15(3):333-337. doi:10.1111/bdi.12066
14. Hochman E, Weizman A, Valevski A, et al. Association between bipolar episodes and fluid and electrolyte homeostasis: a retrospective longitudinal study. Bipolar Disord. 2014;16(8):781-789. doi:10.1111/bdi.12248
15. Volkmann C, Bschor T, Köhler S. Lithium treatment over the lifespan in bipolar disorders. Front Psychiatry. 2020;11:377. doi: 10.3389/fpsyt.2020.00377
16. Boltan DD, Fenves AZ. Effectiveness of normal saline diuresis in treating lithium overdose. Proc (Bayl Univ Med Cent). 2008;21(3):261-263. doi:10.1080/08998280.2008.11928407
17. Sadock BJ, Saddock VA, Ruiz P. Kaplan and Sadock’s synopsis of psychiatry. 11th ed. Wolters Kluwer; 2014.
18. Tighe SK, Mahon PB, Potash JB. Predictors of lithium response in bipolar disorder. Ther Adv Chronic Dis. 2011;2(3):209-226. doi:10.1177/2040622311399173
19. Cipriani A, Barbui C, Salanti G, et al. Comparative efficacy and acceptability of antimanic drugs in acute mania: a multiple-treatments meta-analysis. Lancet. 2011;378(9799):1306-1315. doi:10.1016/S0140-6736(11)60873-8
20. Smith LA, Cornelius V, Tacchi MJ, et al. Acute bipolar mania: a systematic review and meta-analysis of co-therapy vs monotherapy. Acta Psychiatr Scand. 2016;115(1):12-20. doi:10.1111/j.1600-0447.2006.00912.x
21. Nolen WA, Licht RW, Young AH, et al; ISBD/IGSLI Task Force on the treatment with lithium. What is the optimal serum level for lithium in the maintenance treatment of bipolar disorder? A systematic review and recommendations from the ISBD/IGSLI Task Force on treatment with lithium. Bipolar Disord. 2019;21(5):394-409. doi:10.1111/bdi.12805
22. Maj M, Starace F, Nolfe G, et al. Minimum plasma lithium levels required for effective prophylaxis in DSM III bipolar disorder: a prospective study. Pharmacopsychiatry. 1986;19(6):420-423. doi:10.1055/s-2007-1017280
23. Gupta S, Kripalani M, Khastgir U, et al. Management of the renal adverse effects of lithium. Advances in Psychiatric Treatment. 2013;19(6):457-466. doi:10.1192/apt.bp.112.010306
24. Gitlin M. Lithium and the kidney: an updated review. Drug Saf. 1999;20(3):231-243. doi:10.2165/00002018-199920030-00004
25. Jefferson JW. A clinician’s guide to monitoring kidney function in lithium-treated patients. J Clin Psychiatry. 2010;71(9):1153-1157. doi:10.4088/JCP.09m05917yel
26. Shah VC, Kayathi P, Singh G, et al. Enhance your understanding of lithium neurotoxicity. Prim Care Companion CNS Disord. 2015;17(3):10.4088/PCC.14l01767. doi:10.4088/PCC.14l01767
27. Netto I, Phutane VH. Reversible lithium neurotoxicity: review of the literature. Prim Care Companion CNS Disord. 2012;14(1):PCC.11r01197. doi:10.4088/PCC.11r01197
28. Mohandas E, Rajmohan V. Lithium use in special populations. Indian J Psychiatry. 2007;49(3):211-218. doi:10.4103/0019-5545.37325
29. Gupta S, Khastgir U. Drug information update. Lithium and chronic kidney disease: debates and dilemmas. BJPsych Bull. 2017;41(4):216-220. doi:10.1192/pb.bp.116.054031
30. Post RM. Preventing the malignant transformation of bipolar disorder. JAMA. 2018;319(12):1197-1198. doi:10.1001/jama.2018.0322
31. Timmer RT, Sands JM. Lithium intoxication. J Am Soc Nephrol. 1999;10(3):666-674.
32. Demers RG, Heninger GR. Sodium intake and lithium treatment in mania. Am J Psychiatry. 1971;128(1):100-104. doi:10.1176/ajp.128.1.100
33. Hedya SA, Avula A, Swoboda HD. Lithium toxicity. In: StatPearls. StatPearls Publishing; 2020.
Lithium has been used in psychiatry for more than half a century and is considered the gold standard for treating acute mania and maintenance treatment of bipolar disorder.1 Evidence supports its use to reduce suicidal behavior and as an adjunctive treatment for major depressive disorder.2 However, lithium has fallen out of favor because of its narrow therapeutic index as well as the introduction of newer psychotropic medications that have a quicker onset of action and do not require strict blood monitoring. For residents early in their training, keeping track of the laboratory monitoring and medical screening can be confusing. Different institutions and countries have specific guidelines and recommendations for monitoring patients receiving lithium, which adds to the confusion.
We completed a literature review to develop clear and concise recommendations for lithium monitoring for residents in our psychiatry residency program. These recommendations outline screening at baseline and after patients treated with lithium achieve stability. Table 13-11 outlines medical screening parameters, including bloodwork, that should be completed before initiating treatment, and how often such screening should be repeated. Table 2 incorporates these parameters into progress notes in the electronic medical record to keep track of the laboratory values and when they were last drawn. Our aim is to help residents stay organized and prevent missed screenings.
How often should lithium levels be monitored?
After starting a patient on lithium, check the level within 5 to 7 days, and again 5 to 7 days after each dose change. Draw the lithium level 10 to 14 hours after the patient’s last dose (12 hours is ideal).1 Because of dosage changes, lithium levels usually are monitored more frequently during the first 3 months of treatment, until therapeutic levels are reached or symptoms are controlled. Once the patient is stable, monitor lithium levels every 3 months for the first year and every 6 months thereafter, taking into account the patient’s age, medical health, and how consistently the patient reports symptoms/adverse effects.3,5 Continue monitoring levels every 3 months in older adults; in patients with renal dysfunction, thyroid dysfunction, hypercalcemia, or other significant medical comorbidities; and in those taking medications that affect lithium, such as pain medications (nonsteroidal anti-inflammatory drugs can raise lithium levels), certain antihypertensives (angiotensin-converting enzyme inhibitors can raise lithium levels), and diuretics (thiazide diuretics can raise lithium levels; osmotic diuretics and carbonic anhydrase inhibitors can reduce lithium levels).1,3,5
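For readers who track these intervals programmatically, the cadence described above (first draw 5 to 7 days after initiation, every 3 months during the first year, every 6 months thereafter in stable patients) can be sketched as a short schedule generator. This is an illustrative sketch only; the function name and the 30-day month approximation are assumptions, and dose changes or clinical instability would reset the schedule in practice.

```python
from datetime import date, timedelta

def lithium_level_due_dates(start: date, months_on_treatment: int) -> list[date]:
    """Illustrative schedule of lithium-level draws for a stable patient:
    first check ~5 days after initiation, then every 3 months during the
    first year and every 6 months thereafter (months approximated as 30 days)."""
    due = [start + timedelta(days=5)]  # first draw, 5 to 7 days after starting
    month = 3
    while month <= months_on_treatment:
        due.append(start + timedelta(days=month * 30))
        # quarterly through month 12, then semiannually
        month += 3 if month < 12 else 6
    return due
```

For a patient started January 1 and followed for 1 year, this yields the initial draw plus checks at roughly months 3, 6, 9, and 12.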
Lithium levels could vary by up to 0.5 mEq/L during transition between manic, euthymic, and depressive states.12 On a consistent dosage, lithium levels decrease during mania because of hemodilution, and increase during depression secondary to physiological effects specific to these episodes.13,14
Recommendations for plasma lithium levels (trough levels)
Mania. Lithium levels of 0.8 to 1.2 mEq/L often are needed to achieve symptom control during manic episodes.15 As levels approach 1.5 mEq/L, patients are at increased risk for intolerable adverse effects (eg, nausea and vomiting) and toxicity.16,17 Adverse effects at higher levels may result in patients abruptly discontinuing lithium. Patients who experience mania before a depressive episode at illness onset tend to have a better treatment response with lithium.18 Lithium monotherapy has been shown to be less effective for acute mania than antipsychotics or combination therapies.19 Consider combining lithium with valproate or antipsychotics for patients who have tolerated lithium in the past and plan to use lithium for maintenance treatment.20
Maintenance. In adults, the lithium level should be 0.60 to 0.80 mEq/L, but consider levels of 0.40 to 0.60 mEq/L in patients who have a good response to lithium but develop adverse effects at higher levels.21 For patients who do not respond to treatment, such as those with severe mania, maintenance levels can be increased to 0.76 to 0.90 mEq/L.22 These same recommendations for maintenance levels can be used for children and adolescents. In older adults, aim for maintenance levels of 0.4 to 0.6 mEq/L; the maximum level is 0.7 to 0.8 mEq/L for patients age 65 to 79 and should not exceed 0.7 mEq/L in patients age >80. Lithium levels <0.4 mEq/L do not appear to be effective.21
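The age-banded maintenance targets above can be summarized in a small lookup, shown here purely as a teaching sketch. The function name and the (target floor, ceiling) tuple representation are assumptions; real targets are individualized clinically, not computed.

```python
def maintenance_target(age: int) -> tuple[float, float]:
    """Illustrative (target floor, ceiling) maintenance lithium range in mEq/L,
    encoding the age bands described in the text. A teaching sketch only;
    clinical judgment, not code, sets actual targets."""
    if age > 80:
        return (0.4, 0.7)   # text: should not exceed 0.7 mEq/L in patients age >80
    if age >= 65:
        return (0.4, 0.8)   # older adults: aim 0.4-0.6; maximum 0.7-0.8 (age 65-79)
    return (0.6, 0.8)       # adults: 0.60-0.80 mEq/L
```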
Depression. Aim for a lithium level of 0.6 to 1.0 mEq/L for patients with depression.11
Renal function monitoring frequency
Obtain a basic metabolic panel or comprehensive metabolic panel to establish baseline levels of creatinine, blood urea nitrogen (BUN), and estimated glomerular filtration rate (eGFR). Repeat testing at Week 12 and at 6 months to detect any changes. Renal function can be monitored every 6 to 12 months in stable patients, but should be closely watched when a patient’s clinical status changes.3 A new lower eGFR value after starting lithium therapy should be investigated with a repeat test in 2 weeks.23 Mild elevations in creatinine should be monitored, and further medical workup with a nephrologist is recommended for patients with a creatinine level ≥1.6 mg/dL.24 It is important to note that creatinine might remain within normal limits if there is considerable reduction in glomerular function. Creatinine levels could vary because of body mass and diet. Creatinine levels can be low in nonmuscular patients and elevated in patients who consume large amounts of protein.23,25
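The renal decision points above (repeat a newly lower eGFR in 2 weeks; nephrology workup for creatinine ≥1.6 mg/dL; otherwise routine 6- to 12-month rechecks in stable patients) can be expressed as a simple triage sketch. The function name and action strings are illustrative assumptions, not clinical decision software.

```python
def renal_followup(creatinine_mg_dl: float, egfr_newly_lower: bool) -> str:
    """Map the renal-monitoring decision points from the text to an action.
    Thresholds come from the article; this is an illustration only."""
    if creatinine_mg_dl >= 1.6:
        return "nephrology workup"         # creatinine >= 1.6 mg/dL
    if egfr_newly_lower:
        return "repeat eGFR in 2 weeks"    # new lower eGFR after starting lithium
    return "recheck every 6 to 12 months"  # stable patients
```

Note the text's caveat still applies: creatinine can stay within normal limits despite a meaningful drop in glomerular function, so a threshold alone is not a substitute for trending eGFR.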
Ordering a basic metabolic panel also allows electrolyte monitoring. Hyponatremia and dehydration can lead to elevated lithium levels and result in toxicity; hypokalemia might increase the risk of lithium-induced cardiac toxicity. Monitor calcium (corrected serum calcium) because hypercalcemia has been seen in patients treated with lithium.
Thyroid function monitoring frequency
Obtain levels of thyroid-stimulating hormone with reflex to free T4 at baseline, 12 weeks, and 6 months. Monitor thyroid function every 6 to 12 months in stable patients and when a patient’s clinical status changes, such as with new reports of medical or psychiatric symptoms and when there is concern for thyroid dysfunction.3
Lithium and neurotoxicity
Lithium is known to have neurotoxic effects, such as effects on fast-acting neurons leading to dyscoordination or tremor, even at therapeutic levels.26 This is especially the case when lithium is combined with an antipsychotic,26,27 a combination that is used to treat bipolar I disorder with psychotic features. Older adults are at greater risk for neurotoxicity because of physiological changes associated with increasing age.28
Educate patients about adherence, diet, and exercise
Patients might stop taking their psychotropic medications when they start feeling better. Instruct patients to discuss discontinuation with the prescribing clinician before they stop any medication. Educate patients that rapidly discontinuing lithium therapy puts them at high risk of relapse29 and increases the risk of developing treatment-refractory symptoms.23,30 Emphasize the importance of staying hydrated and maintaining adequate sodium in their diet.17,31 Consuming excessive sodium can reduce lithium levels.17,32 Lithium levels could increase when patients experience excessive sweating, such as during exercise or being outside on warm days, because of sodium and volume loss.17,33
1. Tondo L, Alda M, Bauer M, et al. Clinical use of lithium salts: guide for users and prescribers. Int J Bipolar Disord. 2019;7(1):16. doi:10.1186/s40345-019-0151-2
2. Azab AN, Shnaider A, Osher Y, et al. Lithium nephrotoxicity. Int J Bipolar Disord. 2015;3(1):28. doi:10.1186/s40345-015-0028-y
3. American Psychiatric Association. Practice guideline for the treatment of patients with bipolar disorder (revision). Am J Psychiatry. 2002;159:1-50.
4. Yatham LN, Kennedy SH, Parikh SV, et al. Canadian Network for Mood and Anxiety Treatments (CANMAT) and International Society for Bipolar Disorders (ISBD) collaborative update of CANMAT guidelines for the management of patients with bipolar disorder: update 2013. Bipolar Disord. 2013;15:1‐44. doi:10.1111/bdi.12025
5. National Collaborating Center for Mental Health (UK). Bipolar disorder: the NICE guideline on the assessment and management of bipolar disorder in adults, children and young people in primary and secondary care. The British Psychological Society and The Royal College of Psychiatrists; 2014.
6. Kupka R, Goossens P, van Bendegem M, et al. Multidisciplinaire richtlijn bipolaire stoornissen. Nederlandse Vereniging voor Psychiatrie (NVvP); 2015. Accessed August 10, 2020. http://www.nvvp.net/stream/richtlijn-bipolaire-stoornissen-2015
7. Malhi GS, Bassett D, Boyce P, et al. Royal Australian and New Zealand College of Psychiatrists clinical practice guidelines for mood disorders. Aust N Z J Psychiatry. 2015;49:1087‐1206. doi:10.1177/0004867415617657
8. Nederlof M, Heerdink ER, Egberts ACG, et al. Monitoring of patients treated with lithium for bipolar disorder: an international survey. Int J Bipolar Disord. 2018;6(1):12. doi:10.1186/s40345-018-0120-1
9. Leo RJ, Sharma M, Chrostowski DA. A case of lithium-induced symptomatic hypercalcemia. Prim Care Companion J Clin Psychiatry. 2010;12(4):PCC.09l00917. doi:10.4088/PCC.09l00917yel
10. McHenry CR, Lee K. Lithium therapy and disorders of the parathyroid glands. Endocr Pract. 1996;2(2):103-109. doi:10.4158/EP.2.2.103
11. Stahl SM. The prescriber’s guide: Stahl’s essential psychopharmacology. 6th ed. Cambridge University Press; 2017.
12. Kukopulos A, Reginaldi D. Variations of serum lithium concentrations correlated with the phases of manic-depressive psychosis. Agressologie. 1978;19(D):219-222.
13. Rittmannsberger H, Malsiner-Walli G. Mood-dependent changes of serum lithium concentration in a rapid cycling patient maintained on stable doses of lithium carbonate. Bipolar Disord. 2013;15(3):333-337. doi:10.1111/bdi.12066
14. Hochman E, Weizman A, Valevski A, et al. Association between bipolar episodes and fluid and electrolyte homeostasis: a retrospective longitudinal study. Bipolar Disord. 2014;16(8):781-789. doi:10.1111/bdi.12248
15. Volkmann C, Bschor T, Köhler S. Lithium treatment over the lifespan in bipolar disorders. Front Psychiatry. 2020;11:377. doi:10.3389/fpsyt.2020.00377
16. Boltan DD, Fenves AZ. Effectiveness of normal saline diuresis in treating lithium overdose. Proc (Bayl Univ Med Cent). 2008;21(3):261-263. doi:10.1080/08998280.2008.11928407
17. Sadock BJ, Sadock VA, Ruiz P. Kaplan and Sadock’s synopsis of psychiatry. 11th ed. Wolters Kluwer; 2014.
18. Tighe SK, Mahon PB, Potash JB. Predictors of lithium response in bipolar disorder. Ther Adv Chronic Dis. 2011;2(3):209-226. doi:10.1177/2040622311399173
19. Cipriani A, Barbui C, Salanti G, et al. Comparative efficacy and acceptability of antimanic drugs in acute mania: a multiple-treatments meta-analysis. Lancet. 2011;378(9799):1306-1315. doi:10.1016/S0140-6736(11)60873-8
20. Smith LA, Cornelius V, Tacchi MJ, et al. Acute bipolar mania: a systematic review and meta-analysis of co-therapy vs monotherapy. Acta Psychiatr Scand. 2016;115(1):12-20. doi:10.1111/j.1600-0447.2006.00912.x
21. Nolen WA, Licht RW, Young AH, et al; ISBD/IGSLI Task Force on the treatment with lithium. What is the optimal serum level for lithium in the maintenance treatment of bipolar disorder? A systematic review and recommendations from the ISBD/IGSLI Task Force on treatment with lithium. Bipolar Disord. 2019;21(5):394-409. doi:10.1111/bdi.12805
22. Maj M, Starace F, Nolfe G, et al. Minimum plasma lithium levels required for effective prophylaxis in DSM III bipolar disorder: a prospective study. Pharmacopsychiatry. 1986;19(6):420-423. doi:10.1055/s-2007-1017280
23. Gupta S, Kripalani M, Khastgir U, et al. Management of the renal adverse effects of lithium. Advances in Psychiatric Treatment. 2013;19(6):457-466. doi:10.1192/apt.bp.112.010306
24. Gitlin M. Lithium and the kidney: an updated review. Drug Saf. 1999;20(3):231-243. doi:10.2165/00002018-199920030-00004
25. Jefferson JW. A clinician’s guide to monitoring kidney function in lithium-treated patients. J Clin Psychiatry. 2010;71(9):1153-1157. doi:10.4088/JCP.09m05917yel
26. Shah VC, Kayathi P, Singh G, et al. Enhance your understanding of lithium neurotoxicity. Prim Care Companion CNS Disord. 2015;17(3):10.4088/PCC.14l01767. doi:10.4088/PCC.14l01767
27. Netto I, Phutane VH. Reversible lithium neurotoxicity: review of the literature. Prim Care Companion CNS Disord. 2012;14(1):PCC.11r01197. doi:10.4088/PCC.11r01197
28. Mohandas E, Rajmohan V. Lithium use in special populations. Indian J Psychiatry. 2007;49(3):211-218. doi:10.4103/0019-5545.37325
29. Gupta S, Khastgir U. Drug information update. Lithium and chronic kidney disease: debates and dilemmas. BJPsych Bull. 2017;41(4):216-220. doi:10.1192/pb.bp.116.054031
30. Post RM. Preventing the malignant transformation of bipolar disorder. JAMA. 2018;319(12):1197-1198. doi:10.1001/jama.2018.0322
31. Timmer RT, Sands JM. Lithium intoxication. J Am Soc Nephrol. 1999;10(3):666-674.
32. Demers RG, Heninger GR. Sodium intake and lithium treatment in mania. Am J Psychiatry. 1971;128(1):100-104. doi:10.1176/ajp.128.1.100
33. Hedya SA, Avula A, Swoboda HD. Lithium toxicity. In: StatPearls. StatPearls Publishing; 2020.
28. Mohandas E, Rajmohan V. Lithium use in special populations. Indian J Psychiatry. 2007;49(3):211-218. doi:10.4103/0019-5545.37325
29. Gupta S, Khastgir U. Drug information update. Lithium and chronic kidney disease: debates and dilemmas. BJPsych Bull. 2017;41(4):216-220. doi:10.1192/pb.bp.116.054031
30. Post RM. Preventing the malignant transformation of bipolar disorder. JAMA. 2018;319(12):1197-1198. doi:10.1001/jama.2018.0322
31. Timmer RT, Sands JM. Lithium intoxication. J Am Soc Nephrol. 1999;10(3):666-674.
32. Demers RG, Heninger GR. Sodium intake and lithium treatment in mania. Am J Psychiatry. 1971;128(1):100-104. doi:10.1176/ajp.128.1.100
33. Hedya SA, Avula A, Swoboda HD. Lithium toxicity. In: StatPearls. StatPearls Publishing; 2020.
Today’s psychiatric neuroscience advances were science fiction during my residency
During my residency training years, I had many rosy and bold dreams about the future of psychiatry, hoping for many breakthroughs.
Early on, I decided to pursue an academic career, and specifically to focus on the neurobiology of schizophrenia, bipolar disorder, and other psychoses. I secured a neuroscience mentor, conducted a research project, and presented my findings at the American Psychiatric Association Annual Meeting. Although at the time everyone used the term “functional” to describe mental illnesses, I was convinced that they were all neurologic conditions, with prominent psychiatric manifestations. And I have been proven right.
After my residency, I eagerly pursued a neuroscience fellowship at the National Institutes of Health. My fantasy was that during my career as a psychiatric neuroscientist, brain exploration would uncover the many mysteries of psychiatric disorders. I was insightful enough to recognize that what I envisioned for the future of psychiatry qualified as science fiction, but I never stopped dreaming.
Today, the advances in psychiatric neuroscience that were unimaginable during my residency have become dazzling discoveries. My journey as a psychiatric neuroscientist has been more thrilling than I ever imagined. I recall doing postmortem research on the brains of hundreds of deceased psychiatric patients, noticing sulci widening and ventricular dilatation, and wondering whether one day we would be able to detect those atrophic changes while the patients were alive. Although I measured those changes in postmortem brains, I was cognizant that due to preservation artifacts, such measurements were less reliable than measurements of living brains.
And then the advent of neuroimaging fulfilled my fantasies. This began towards the end of my fellowship, and has exploded with neurobiologic findings throughout my academic career. Then came dramatic methodologies to probe brain molecular and cellular pathologies, followed by breakthrough clinical advances. Entirely new vistas of research into psychiatric brain disorders are opening every day. The exhilaration will never end!
From science fiction to clinical reality
Here is a quick outline of some of the “science fiction” of psychiatry that has come true since my training days. Back then, these discoveries were completely absent from the radar screen of psychiatry, when it was still a fledgling medical specialty struggling to emerge from the dominant yet nonempirical era of psychoanalysis.
Brain exploration methods. Unprecedented breakthroughs in computer technology have allowed psychiatric neuroscientists to create a new field of neuroimaging research that includes:
- cerebral blood flow (CBF)
- positron emission tomography (PET)
- single photon emission computed tomography (SPECT).
These functional neuroimaging methods (using ionizing radiation) have enabled clinicians to see abnormal blood flow patterns in the brains of living patients. One of the earliest findings was hypofrontality in patients with schizophrenia, implicating frontal pathology in this severe brain disorder. PET was also used for dopamine and serotonin receptor imaging.
Computerized axial tomography. Compared with skull X-rays, CT (“CAT”) scans provided a more detailed view of brain tissue and began a structural neuroimaging revolution that enriched psychiatric research and was later applied to organs other than the brain.
Magnetic resonance imaging (MRI) became the “big kahuna” of neuroimaging when it arrived in the early 1980s and quickly supplanted CT in research because it is safer (no ionizing radiation, so scans can be repeated multiple times, with or without tasks). It also provided exquisite neuroanatomical detail of brain tissue with stunning fidelity. Subsequently, several MRI techniques and software programs were developed that advanced psychiatric research to multiple new frontiers, including:
- Morphological neuroimaging with MRI
- Magnetic resonance spectroscopy (MRS), which acts like a living, noninvasive biopsy, quantifying several chemicals (such as choline, lactate, glutamine, adenosine triphosphate, and the neuronal marker N-acetylaspartate) in a small volume (≤1 cc) of neural tissue in various regions
- Functional MRI (fMRI), which measures blood flow changes during actual or imagined tasks in the brains of patients vs healthy controls
- Diffusion tensor imaging (DTI), which evaluates the integrity of white matter (60% of brain volume, including 137,000 miles of myelinated fibers) by measuring the flow of water inside myelinated fibers (anisotropy and diffusivity). DTI of the corpus callosum, the largest brain commissure, comprising 200 million interhemispheric fibers, has revealed many abnormalities. This was one of the structures I investigated during my fellowship, including in a histopathological study.1
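The anisotropy and diffusivity that DTI reports reduce to simple formulas over the eigenvalues of the fitted diffusion tensor. As a minimal illustrative sketch (the eigenvalue inputs below are hypothetical, not drawn from any study):

```python
import math

def fa_md(l1, l2, l3):
    """Fractional anisotropy (FA) and mean diffusivity (MD)
    from the 3 eigenvalues of a voxel's diffusion tensor."""
    md = (l1 + l2 + l3) / 3.0  # mean diffusivity
    num = math.sqrt((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    if den == 0:
        return 0.0, md
    fa = math.sqrt(1.5) * num / den  # FA ranges from 0 to 1
    return fa, md

# Isotropic diffusion (e.g., free water): FA = 0
# Highly directional diffusion (an intact myelinated tract): FA near 1
```

Low FA along a tract such as the corpus callosum is the kind of white matter abnormality these studies report.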
All 4 of these neuroimaging techniques continue to generate a wealth of data about brain structure and function in psychosis, mood disorders, anxiety disorders, borderline personality disorder, obsessive-compulsive disorder, eating disorders, and substance use disorders. All these discoveries were utterly impossible to predict during my residency. I am proud to have published the first reports in the literature of ventricular enlargement in patients with bipolar disorder,2 cortical atrophy in schizophrenia and mania,3 reductions of hippocampal volume in patients with schizophrenia using MRS,4 and progressive brain atrophy in patients with schizophrenia.5 It is especially gratifying that I played a small role in translating my science fiction fantasies into clinical reality!
Other breakthrough methodologies that are advancing psychiatric neuroscience today but were science fiction during my residency days include:
- Induced pluripotent stem cells, which enable de-differentiating adult skin cells and then re-differentiating them into any type of cell, including neurons. This allows researchers to study any patient’s brain cells without an invasive, high-risk brain biopsy. As a young resident, I would never have predicted that this virtual brain biopsy would be possible!
- Optogenetics, which enables controlling cell behavior with light and genetically encoded light-sensitive proteins. This has triggered a cornucopia of neuroscience discoveries, allowing researchers to modulate cell-signaling cascades and probe cellular biology. Opsins such as channelrhodopsin and halorhodopsin are used as tools to turn neurons on or off rapidly and safely.
- Genome-wide association studies (GWAS) have revolutionized the field of molecular neurogenetics and are enabling clinicians to detect risk genes by comparing the DNA samples of thousands of psychiatric patients with thousands of healthy controls. This is how several hundred risk genes have been identified for schizophrenia, bipolar disorder, autism spectrum disorder, and more to come.
- Clustered regularly interspaced short palindromic repeats (CRISPR) is a remarkable genetic “scissors” (that earned its inventors the 2020 Nobel Prize) that allows splicing out a disease gene and splicing in a normal gene. This will have an enormous future application in preventing an adulthood illness at its roots during fetal life. The future medical implications for psychiatric disorders are prodigious!
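At its core, a GWAS tests each variant for association by comparing allele counts between cases and controls. A minimal sketch of one such per-variant test, using hypothetical counts and a plain Pearson chi-square (real pipelines add covariates, quality control, and a genome-wide significance threshold to correct for testing millions of variants):

```python
def allele_chi_square(case_alt, case_ref, ctrl_alt, ctrl_ref):
    """Pearson chi-square statistic (1 df) for a 2x2 allele-count
    table: rows = cases vs controls, columns = alt vs ref allele."""
    table = [[case_alt, case_ref], [ctrl_alt, ctrl_ref]]
    total = sum(sum(row) for row in table)
    row_totals = [sum(row) for row in table]
    col_totals = [case_alt + ctrl_alt, case_ref + ctrl_ref]
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            # Expected count under the null of no association
            expected = row_totals[i] * col_totals[j] / total
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Identical allele frequencies in cases and controls -> statistic of 0;
# a large statistic at a variant flags it as a candidate risk locus.
```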
Clinical advances. Many therapies or approaches that did not exist during my residency (and how I dreamed about them back then!) are available to today’s clinicians. These include:
- Rapid-acting antidepressants that reverse severe and chronic depression and suicidal urges within a few hours or a couple of days. As a resident, I waited for weeks or months to see patients with depression reach the full remission that is now achieved practically the same day with IV ketamine, intranasal esketamine, IV scopolamine, and inhalable nitrous oxide. During my residency, the closest thing we had to a rapid-acting treatment for depression was electroconvulsive therapy (ECT), but that usually took 2 to 3 weeks. Psychiatric clinicians should never cease to appreciate how an intractable, treatment-refractory depression can rapidly be turned off like a light switch, restoring normal mood to desperately ill persons.
- Neuromodulation techniques are flourishing. Beyond ECT, transcranial magnetic stimulation (TMS), vagus nerve stimulation (VNS), transcranial direct current stimulation (tDCS), deep brain stimulation (DBS), low field magnetic stimulation (LFMS), magnetic seizure therapy (MST), near-infrared radiation (NIR), and focused ultrasound (FUS) are approved or under development, offering millions of patients with various neuropsychiatric disorders potential recovery not with pharmacotherapy, but via a brain-targeted approach.
- Telepsychiatry. Now taken for granted during the COVID-19 pandemic, telepsychiatry was completely unimaginable during my residency. Yes, we had phones, but not smartphones! The only “zoom” we knew was the furious sound of a sports car engine! To be able to see and evaluate a patient literally anywhere in the world was science fiction personified! Increased remote access to psychiatric care by patients everywhere is a truly remarkable advance that helped avoid a disastrous lack of psychiatric treatment during the current pandemic that brought in-person interactions between psychiatric physicians and their patients to a screeching halt.
- Neurobiologic effects of psychotherapy. Viewing psychotherapy as a neurobiologic treatment was totally unknown and unimaginable during my residency. I was heavily trained in various types of psychotherapy, but not once did any of my supervisors mention that psychotherapy is a brain-altering process: it changes brain structure, induces experiential neuroplasticity, and generates billions of dendritic spines in patients’ cortex and limbic structures, helping them connect the dots and develop new insights. No one knew that psychotherapy can mimic the neural effects of pharmacotherapy.
- Immunomodulatory effects of psychotherapy. It was completely unknown that psychotherapies such as cognitive-behavioral therapy can lower levels of inflammatory biomarkers in patients’ CSF and serum. Back then, no one imagined that psychotherapy had immunomodulatory effects. These discoveries are revolutionary for us psychiatrists and confirm the neurobiologic mechanisms of psychotherapy for every patient we treat.
- Epigenetics. This was rarely, if ever, mentioned when I was a resident. We knew from clinical studies that children who were abused or neglected often develop severe mood or psychotic disorders in adulthood. But we did not know that trauma modifies some genes via under- or overexpression, and that such epigenetic changes alter brain development towards psychopathology. The mysteries of psychiatric brain disorders generated by childhood trauma have been clarified by advances in epigenetics.
Aspirational, futuristic therapies. Even now, as a seasoned psychiatric neuroscientist, I continue to dream. Research is providing many clues for potentially radical psychiatric treatments that go beyond standard antipsychotics, antidepressants, mood stabilizers, or anxiolytics. But today, I fully expect that scientific dreams eventually come true through research. For example, the following neuroscientific therapeutics strategies may someday become routine in clinical practice:
- microglia inhibition
- mitochondria repair
- anti-apoptotic therapy
- white matter connectivity restoration
- neuroprotection (enhancing neurogenesis, increasing neurotrophic factors, and enhancing synaptogenesis)
- reversal of glutamate N-methyl-d-aspartate (NMDA) receptor hypofunction
- prevention of amyloid formation.
Data analysis breakthroughs. Side-by-side with the explosion of new findings and the mountains of data accumulating in psychiatric neuroscience, unprecedented and revolutionary data-management techniques have emerged to facilitate the herculean task of analysis: extracting the proverbial needle from the haystack and distilling the overall meaning of massive datasets. These techniques, whose names were not in our vocabulary during my residency days, include:
- machine learning
- artificial intelligence
- deep learning
- big data.
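As a toy illustration of the first item, a nearest-centroid classifier can separate patients from controls on per-subject imaging features. Everything below is hypothetical and for illustration only; the feature names and values are invented:

```python
def nearest_centroid_fit(X, y):
    """Compute the mean feature vector (centroid) of each class."""
    centroids = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        n = len(rows)
        centroids[label] = [sum(col) / n for col in zip(*rows)]
    return centroids

def nearest_centroid_predict(centroids, x):
    """Assign x to the class with the closest centroid (squared Euclidean)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda lab: dist2(centroids[lab], x))

# Hypothetical features per subject: [ventricular volume, cortical thickness]
X = [[1.2, 2.9], [1.4, 2.8], [0.7, 3.4], [0.6, 3.5]]
y = ["patient", "patient", "control", "control"]
model = nearest_centroid_fit(X, y)
```

A new subject’s feature vector is then labeled by whichever class centroid it lands nearest; modern pipelines use the same idea with far richer models and thousands of features.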
With the help of powerful computers and ingenious software, discovering critical nuggets of knowledge about the brain and predicting the best approaches to healing dysfunctional brains are now possible. Those powerful methods of analyzing massive data are the vehicles for transforming science fiction to reality by assembling the jigsaw puzzle(s) of the human brain, arguably the last frontier in medical science.
My life experiences as a psychiatric neuroscientist have convinced me that nothing is beyond the reach of scientific research. Unraveling the divine brain’s complexities will eventually become reality. So, let us never stop dreaming and fantasizing!
1. Nasrallah HA, McCalley-Whitters M, Bigelow LB, et al. A histological study of the corpus callosum in chronic schizophrenia. Psychiatry Res. 1983;8(4):251-260.
2. Nasrallah HA, McCalley-Whitters M, Jacoby CG. Cerebral ventricular enlargement in young manic males. A controlled CT study. J Affect Disord. 1982;4(1):15-19.
3. Nasrallah HA, McCalley-Whitters M, Jacoby CG. Cortical atrophy in schizophrenia and mania: a comparative CT study. J Clin Psychiatry. 1982;43(11):439-441.
4. Nasrallah HA, Skinner TE, Schmalbrock P, et al. Proton magnetic resonance spectroscopy (1H MRS) of the hippocampal formation in schizophrenia: a pilot study. Br J Psychiatry. 1994;165(4):481-485.
5. Nasrallah HA, Olson SC, McCalley-Whitters M, et al. Cerebral ventricular enlargement in schizophrenia. A preliminary follow-up study. Arch Gen Psychiatry. 1986;43(2):157-159.
During my residency training years, I had many rosy and bold dreams about the future of psychiatry, hoping for many breakthroughs.
Early on, I decided to pursue an academic career, and specifically to focus on the neurobiology of schizophrenia, bipolar disorder, and other psychoses. I secured a neuroscience mentor, conducted a research project, and presented my findings at the American Psychiatric Association Annual Meeting. Although at the time everyone used the term “functional” to describe mental illnesses, I was convinced that they were all neurologic conditions, with prominent psychiatric manifestations. And I have been proven right.
After my residency, I eagerly pursued a neuroscience fellowship at the National Institutes of Health. My fantasy was that during my career as a psychiatric neuroscientist, brain exploration would uncover the many mysteries of psychiatric disorders. I was insightful enough to recognize that what I envisioned for the future of psychiatry qualified as science fiction, but I never stopped dreaming.
Today, the advances in psychiatric neuroscience that were unimaginable during my residency have become dazzling discoveries. My journey as a psychiatric neuroscientist has been more thrilling than I ever imagined. I recall doing postmortem research on the brains of hundreds of deceased psychiatric patients, noticing sulci widening and ventricular dilatation, and wondering whether one day we would be able to detect those atrophic changes while the patients were alive. Although I measured those changes in postmortem brains, I was cognizant that due to preservation artifacts, such measurements were less reliable than measurements of living brains.
And then the advent of neuroimaging fulfilled my fantasies. This began towards the end of my fellowship, and has exploded with neurobiologic findings throughout my academic career. Then came dramatic methodologies to probe brain molecular and cellular pathologies, followed by breakthrough clinical advances. Entirely new vistas of research into psychiatric brain disorders are opening every day. The exhilaration will never end!
From science fiction to clinical reality
Here is a quick outline of some of the “science fiction” of psychiatry that has come true since my training days. Back then, these discoveries were completely absent from the radar screen of psychiatry, when it was still a fledgling medical specialty struggling to emerge from the dominant yet nonempirical era of psychoanalysis.
Brain exploration methods. Unprecedented breakthroughs in computer technology have allowed psychiatric neuroscientists to create a new field of neuroimaging research that includes:
- cerebral blood flow (CBF)
- position emission tomography (PET)
- single photon emission computed tomography (SPECT).
Continue to: These functional neuroimaging...
These functional neuroimaging methods (using ionizing radiation) have enabled clinicians to see abnormal blood flow patterns in the brains of living patients. One of the earliest findings was hypofrontality in patients with schizophrenia, implicating frontal pathology in this severe brain disorder. PET was also used for dopamine and serotonin receptor imaging.
Computerized axia tomography. Compared with skull X-rays, CT (“CAT”) scans provided a more detailed view of brain tissue, and began a structural neuroimaging revolution that enriched psychiatric research, but also was applied to organs other than the brain.
Magnetic resonance imaging (MRI) became the “big kahuna” of neuroimaging when arrived in the early 1980s and quickly supplanted CT research because it is safer (no ionizing radiation, and it can be repeated multiple times with or without tasks). It also provided exquisite neuroanatomical details of brain tissue with stunning fidelity. Subsequently, several MRI techniques/software programs were developed that advanced research in psychiatry to multiple new frontiers, including:
- Morphological neuroimaging with MRI
- Magnetic resonance spectroscopy (MRS), which acts like a living, noninvasive biopsy of several chemicals (such as choline, lactate, glutamine, adenosine triphosphate, and the neuronal marker N-acetylcysteine) in a small volume (≤1 cc) of neural tissue in various regions
- Functional MRI (fMRI), which measures blood flow changes during actual or imagined tasks in the brains of patients vs healthy controls
- Diffusion tensor imaging (DTI), which evaluates the integrity of white matter (60% of brain volume, including 137,000 miles of myelinated fibers) by measuring the flow of water inside myelinated fibers (anisotropy and diffusivity). DTI of the corpus callosum, the largest brain commissure that is comprised of 200 million interhemispheric fibers, has revealed many abnormalities. This was one of the structures I investigated during my fellowship, including a histopathological study.1
All 4 of these neuroimaging techniques continue to generate a wealth of data about brain structure and function in psychosis, mood disorders, anxiety disorders, borderline personality disorder, obsessive-compulsive disorder, eating disorders, and substance use disorders. All these discoveries were utterly impossible to predict during my residency. I am proud to have published the first reports in the literature of ventricular enlargement in patients with bipolar disorder,2 cortical atrophy in schizophrenia and mania,3 reductions of hippocampal volume in patients with schizophrenia using MRS,4 and progressive brain atrophy in patients with schizophrenia.5 It is especially gratifying that I played a small role in translating my science fiction fantasies into clinical reality!
Other breakthrough methodologies that are advancing psychiatric neuroscience today but were science fiction during my residency days include:
- Pluripotent stem cells, which enable the de-differentiation of adult skin cells and then re-differentiating them into any type of cell, including neurons. This allows researchers to conduct studies on any patient’s brain cells without needing to do an invasive, high-risk brain biopsy. As a young resident, I would never have predicted that this virtual brain biopsy would be possible!
- Optogenetics, which enables controlling cell behavior using light and genetically encoded light-sensitive proteins. This triggered a cornucopia of neuroscience discoveries by using optogenetics to modulate cell-signaling cascades to understand cellular biology. Halorhodopsin and bacteriorhodopsin are used as tools to turn neurons off or on rapidly and safely.
- Genome-wide association studies (GWAS) have revolutionized the field of molecular neurogenetics and are enabling clinicians to detect risk genes by comparing the DNA samples of thousands of psychiatric patients with thousands of healthy controls. This is how several hundred risk genes have been identified for schizophrenia, bipolar disorder, autism spectrum disorder, and more to come.
- Clustered regularly interspaced short palindromic repeats (CRISPR) is a remarkable genetic “scissors” (that earned its inventors the 2020 Nobel Prize) that allows splicing out a disease gene and splicing in a normal gene. This will have an enormous future application in preventing an adulthood illness at its roots during fetal life. The future medical implications for psychiatric disorders are prodigious!
Continue to: Clinical advances
Clinical advances. Many therapies or approaches that did not exist during my residency (and how I dreamed about them back then!) are available to today’s clinicians. These include:
- Rapid-acting antidepressants that reverse severe and chronic depression and suicidal urges within a few hours or a couple of days. As a resident, I waited for weeks or months to see patients with depression reach the full remission that is now achieved practically the same day with IV ketamine, intranasal esketamine, IV scopolamine, and inhalable nitrous oxide. During my residency, the closest thing we had to a rapid-acting treatment for depression was electroconvulsive therapy (ECT), but that usually took 2 to 3 weeks. Psychiatric clinicians should never cease to appreciate how an intractable, treatment-refractory depression can rapidly be turned off like a light switch, restoring normal mood to desperately ill persons.
- Neuromodulation techniques are flourishing. Beyond ECT, transcranial magnetic stimulation (TMS), vagus nerve stimulation (VNS), transcranial direct current stimulation (tDCS), deep brain stimulation (DBS), low field magnetic stimulation (LFMS), magnetic seizure therapy (MST), near-infrared radiation (NIR), and focused ultrasound (FUS) are approved or under development, offering millions of patients with various neuropsychiatric disorders potential recovery not with pharmacotherapy, but via a brain-targeted approach.
- Telepsychiatry. Now taken for granted during the COVID-19 pandemic, telepsychiatry was completely unimaginable during my residency. Yes, we had phones, but not smartphones! The only “zoom” we knew was the furious sound of a sports car engine! To be able to see and evaluate a patient literally anywhere in the world was science fiction personified! Increased remote access to psychiatric care by patients everywhere is a truly remarkable advance that helped avoid a disastrous lack of psychiatric treatment during the current pandemic that brought in-person interactions between psychiatric physicians and their patients to a screeching halt.
- Neurobiologic effects of psychotherapy. Viewing psychotherapy as a neurobiologic treatment was totally unknown and unimaginable during my residency. I was heavily trained in various types of psychotherapies, but not once did any of my supervisors mention experiential neuroplasticity as a brain-altering process, or that psychotherapy changes brain structure, induces experimental neuroplasticity, and induces billions of dendritic spines in patients’ cortex and limbic structures, helping them connect the dots and develop new insights. No one knew that psychotherapy can mimic the neural effects of pharmacotherapy.
- Immunomodulatory effects of psychotherapy. It was completely unknown that psychotherapies such as cognitive-behavioral therapy can lower levels of inflammatory biomarkers in patients’ CSF and serum. Back then, no one imagined that psychotherapy had immunomodulatory effects. These discoveries are revolutionary for us psychiatrists and confirm the neurobiologic mechanisms of psychotherapy for every patient we treat.
- Epigenetics. This was rarely, if ever, mentioned when I was a resident. We knew from clinical studies that children who were abused or neglected often develop severe mood or psychotic disorders in adulthood. But we did not know that trauma modifies some genes via under- or overexpression, and that such epigenetic changes alter brain development towards psychopathology. The mysteries of psychiatric brain disorders generated by childhood trauma have been clarified by advances in epigenetics.
Aspirational, futuristic therapies. Even now, as a seasoned psychiatric neuroscientist, I continue to dream. Research is providing many clues for potentially radical psychiatric treatments that go beyond standard antipsychotics, antidepressants, mood stabilizers, or anxiolytics. But today, I fully expect that scientific dreams eventually come true through research. For example, the following neuroscientific therapeutics strategies may someday become routine in clinical practice:
- microglia inhibition
- mitochondria repair
- anti-apoptotic therapy
- white matter connectivity restoration
- neuroprotection (enhancing neurogenesis, increasing neurotropic factors, and enhancing synaptogenesis)
- reverse glutamate N-methyl-
d -aspartate hypofunction - prevent amyloid formation.
Data analysis breakthroughs. Side-by-side with the explosion of new findings and amassing mountains of data in psychiatric neuroscience, unprecedented and revolutionary data-management techniques have emerged to facilitate the herculean task of data analysis to extract the mythical needle in a haystack and derive the overall impact of masses of data. These techniques, whose names were not in our vocabulary during my residency days, include:
- machine learning
- artificial intelligence
- deep learning
- big data.
With the help of powerful computers and ingenious software, discovering critical nuggets of knowledge about the brain and predicting the best approaches to healing dysfunctional brains are now possible. Those powerful methods of analyzing massive data are the vehicles for transforming science fiction to reality by assembling the jigsaw puzzle(s) of the human brain, arguably the last frontier in medical science.
My life experiences as a psychiatric neuroscientist have convinced me that nothing is beyond the reach of scientific research. Unraveling the divine brain’s complexities will eventually become reality. So, let us never stop dreaming and fantasizing!
During my residency training years, I had many rosy and bold dreams about the future of psychiatry, hoping for many breakthroughs.
Early on, I decided to pursue an academic career, and specifically to focus on the neurobiology of schizophrenia, bipolar disorder, and other psychoses. I secured a neuroscience mentor, conducted a research project, and presented my findings at the American Psychiatric Association Annual Meeting. Although at the time everyone used the term “functional” to describe mental illnesses, I was convinced that they were all neurologic conditions, with prominent psychiatric manifestations. And I have been proven right.
After my residency, I eagerly pursued a neuroscience fellowship at the National Institutes of Health. My fantasy was that during my career as a psychiatric neuroscientist, brain exploration would uncover the many mysteries of psychiatric disorders. I was insightful enough to recognize that what I envisioned for the future of psychiatry qualified as science fiction, but I never stopped dreaming.
Today, the advances in psychiatric neuroscience that were unimaginable during my residency have become dazzling discoveries. My journey as a psychiatric neuroscientist has been more thrilling than I ever imagined. I recall doing postmortem research on the brains of hundreds of deceased psychiatric patients, noticing sulci widening and ventricular dilatation, and wondering whether one day we would be able to detect those atrophic changes while the patients were alive. Although I measured those changes in postmortem brains, I was cognizant that due to preservation artifacts, such measurements were less reliable than measurements of living brains.
And then the advent of neuroimaging fulfilled my fantasies. This began towards the end of my fellowship, and has exploded with neurobiologic findings throughout my academic career. Then came dramatic methodologies to probe brain molecular and cellular pathologies, followed by breakthrough clinical advances. Entirely new vistas of research into psychiatric brain disorders are opening every day. The exhilaration will never end!
From science fiction to clinical reality
Here is a quick outline of some of the “science fiction” of psychiatry that has come true since my training days. Back then, these discoveries were completely absent from the radar screen of psychiatry, when it was still a fledgling medical specialty struggling to emerge from the dominant yet nonempirical era of psychoanalysis.
Brain exploration methods. Unprecedented breakthroughs in computer technology have allowed psychiatric neuroscientists to create a new field of neuroimaging research that includes:
- cerebral blood flow (CBF)
- positron emission tomography (PET)
- single photon emission computed tomography (SPECT).
These functional neuroimaging methods (using ionizing radiation) have enabled clinicians to see abnormal blood flow patterns in the brains of living patients. One of the earliest findings was hypofrontality in patients with schizophrenia, implicating frontal pathology in this severe brain disorder. PET was also used for dopamine and serotonin receptor imaging.
Computerized axial tomography. Compared with skull X-rays, CT (“CAT”) scans provided a far more detailed view of brain tissue and launched a structural neuroimaging revolution that enriched psychiatric research and was later applied to organs beyond the brain.
Magnetic resonance imaging (MRI) became the “big kahuna” of neuroimaging when it arrived in the early 1980s and quickly supplanted CT in research because it is safer (no ionizing radiation, so scans can be repeated multiple times, with or without tasks) and provides exquisite neuroanatomical detail of brain tissue with stunning fidelity. Subsequently, several MRI techniques and software programs were developed that advanced psychiatric research to multiple new frontiers, including:
- Morphological neuroimaging with MRI
- Magnetic resonance spectroscopy (MRS), which acts like a living, noninvasive biopsy of several chemicals (such as choline, lactate, glutamine, adenosine triphosphate, and the neuronal marker N-acetylaspartate) in a small volume (≤1 cc) of neural tissue in various regions
- Functional MRI (fMRI), which measures blood flow changes during actual or imagined tasks in the brains of patients vs healthy controls
- Diffusion tensor imaging (DTI), which evaluates the integrity of white matter (60% of brain volume, including 137,000 miles of myelinated fibers) by measuring the flow of water inside myelinated fibers (anisotropy and diffusivity). DTI of the corpus callosum, the largest brain commissure, comprising 200 million interhemispheric fibers, has revealed many abnormalities. This was one of the structures I investigated during my fellowship, including in a histopathological study.1
All 4 of these neuroimaging techniques continue to generate a wealth of data about brain structure and function in psychosis, mood disorders, anxiety disorders, borderline personality disorder, obsessive-compulsive disorder, eating disorders, and substance use disorders. All these discoveries were utterly impossible to predict during my residency. I am proud to have published the first reports in the literature of ventricular enlargement in patients with bipolar disorder,2 cortical atrophy in schizophrenia and mania,3 reductions of hippocampal volume in patients with schizophrenia using MRS,4 and progressive brain atrophy in patients with schizophrenia.5 It is especially gratifying that I played a small role in translating my science fiction fantasies into clinical reality!
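To make the anisotropy measure behind DTI concrete, here is a minimal Python sketch of the standard fractional anisotropy (FA) formula computed from a diffusion tensor’s three eigenvalues. The eigenvalues shown are illustrative placeholders, not patient data.

```python
import numpy as np

def fractional_anisotropy(eigenvalues):
    """Fractional anisotropy (FA) from the three eigenvalues of a
    diffusion tensor: 0 = perfectly isotropic diffusion, values near 1 =
    diffusion largely along one axis, as in a coherent myelinated fiber
    bundle."""
    lam = np.asarray(eigenvalues, dtype=float)
    mean = lam.mean()
    num = np.sqrt(((lam - mean) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return float(np.sqrt(1.5) * num / den) if den > 0 else 0.0

# Illustrative eigenvalues (in units of 10^-3 mm^2/s), not real measurements:
print(fractional_anisotropy([1.0, 1.0, 1.0]))  # isotropic -> 0.0
print(fractional_anisotropy([1.7, 0.3, 0.2]))  # fiber-like -> roughly 0.84
```

In practice, FA is computed voxel by voxel from a fitted tensor; low FA along a tract such as the corpus callosum is what flags compromised white matter integrity.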
Other breakthrough methodologies that are advancing psychiatric neuroscience today but were science fiction during my residency days include:
- Pluripotent stem cells, which enable the de-differentiation of adult skin cells and then re-differentiating them into any type of cell, including neurons. This allows researchers to conduct studies on any patient’s brain cells without needing to do an invasive, high-risk brain biopsy. As a young resident, I would never have predicted that this virtual brain biopsy would be possible!
- Optogenetics, which enables controlling cell behavior using light and genetically encoded light-sensitive proteins. This triggered a cornucopia of neuroscience discoveries by using optogenetics to modulate cell-signaling cascades to understand cellular biology. Halorhodopsin and bacteriorhodopsin are used as tools to turn neurons off or on rapidly and safely.
- Genome-wide association studies (GWAS) have revolutionized the field of molecular neurogenetics and are enabling clinicians to detect risk genes by comparing the DNA samples of thousands of psychiatric patients with thousands of healthy controls. This is how several hundred risk genes have been identified for schizophrenia, bipolar disorder, autism spectrum disorder, and more to come.
- Clustered regularly interspaced short palindromic repeats (CRISPR) is a remarkable genetic “scissors” (that earned its inventors the 2020 Nobel Prize) that allows splicing out a disease gene and splicing in a normal gene. This will have an enormous future application in preventing an adulthood illness at its roots during fetal life. The future medical implications for psychiatric disorders are prodigious!
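At its core, a GWAS performs, at each variant, a simple case-control association test. Here is a minimal Python sketch of the Pearson chi-square test on a 2×2 allele-count table; the counts are hypothetical, invented for illustration:

```python
def allele_chi_square(case_alt, case_ref, ctrl_alt, ctrl_ref):
    """Pearson chi-square statistic (1 degree of freedom) for a 2x2
    allele-count table comparing cases vs controls -- the basic test
    behind each point on a GWAS Manhattan plot."""
    table = [[case_alt, case_ref], [ctrl_alt, ctrl_ref]]
    total = case_alt + case_ref + ctrl_alt + ctrl_ref
    row = [case_alt + case_ref, ctrl_alt + ctrl_ref]
    col = [case_alt + ctrl_alt, case_ref + ctrl_ref]
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Hypothetical counts for one variant: 2,000 case and 2,000 control chromosomes
chi2 = allele_chi_square(case_alt=620, case_ref=1380, ctrl_alt=500, ctrl_ref=1500)
print(f"chi-square = {chi2:.1f}")
```

For 1 degree of freedom, the conventional genome-wide significance threshold (p = 5 × 10⁻⁸) corresponds to a chi-square of roughly 29.7; real GWAS pipelines layer quality control, covariate adjustment, and multiple-testing machinery on top of this basic comparison.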
Clinical advances. Many therapies or approaches that did not exist during my residency (and how I dreamed about them back then!) are available to today’s clinicians. These include:
- Rapid-acting antidepressants that reverse severe and chronic depression and suicidal urges within a few hours or a couple of days. As a resident, I waited for weeks or months to see patients with depression reach the full remission that is now achieved practically the same day with IV ketamine, intranasal esketamine, IV scopolamine, and inhalable nitrous oxide. During my residency, the closest thing we had to a rapid-acting treatment for depression was electroconvulsive therapy (ECT), but that usually took 2 to 3 weeks. Psychiatric clinicians should never cease to appreciate how an intractable, treatment-refractory depression can rapidly be turned off like a light switch, restoring normal mood to desperately ill persons.
- Neuromodulation techniques are flourishing. Beyond ECT, transcranial magnetic stimulation (TMS), vagus nerve stimulation (VNS), transcranial direct current stimulation (tDCS), deep brain stimulation (DBS), low field magnetic stimulation (LFMS), magnetic seizure therapy (MST), near-infrared radiation (NIR), and focused ultrasound (FUS) are approved or under development, offering millions of patients with various neuropsychiatric disorders potential recovery not with pharmacotherapy, but via a brain-targeted approach.
- Telepsychiatry. Now taken for granted during the COVID-19 pandemic, telepsychiatry was completely unimaginable during my residency. Yes, we had phones, but not smartphones! The only “zoom” we knew was the furious sound of a sports car engine! To be able to see and evaluate a patient literally anywhere in the world was science fiction personified! Increased remote access to psychiatric care by patients everywhere is a truly remarkable advance that helped avoid a disastrous lack of psychiatric treatment during the current pandemic that brought in-person interactions between psychiatric physicians and their patients to a screeching halt.
- Neurobiologic effects of psychotherapy. Viewing psychotherapy as a neurobiologic treatment was totally unknown and unimaginable during my residency. I was heavily trained in various types of psychotherapy, but not once did any of my supervisors mention that psychotherapy changes brain structure, induces experiential neuroplasticity, and generates billions of dendritic spines in patients’ cortex and limbic structures, helping them connect the dots and develop new insights. No one knew that psychotherapy can mimic the neural effects of pharmacotherapy.
- Immunomodulatory effects of psychotherapy. It was completely unknown that psychotherapies such as cognitive-behavioral therapy can lower levels of inflammatory biomarkers in patients’ CSF and serum. Back then, no one imagined that psychotherapy had immunomodulatory effects. These discoveries are revolutionary for us psychiatrists and confirm the neurobiologic mechanisms of psychotherapy for every patient we treat.
- Epigenetics. This was rarely, if ever, mentioned when I was a resident. We knew from clinical studies that children who were abused or neglected often develop severe mood or psychotic disorders in adulthood. But we did not know that trauma modifies some genes via under- or overexpression, and that such epigenetic changes alter brain development towards psychopathology. The mysteries of psychiatric brain disorders generated by childhood trauma have been clarified by advances in epigenetics.
Aspirational, futuristic therapies. Even now, as a seasoned psychiatric neuroscientist, I continue to dream. Research is providing many clues for potentially radical psychiatric treatments that go beyond standard antipsychotics, antidepressants, mood stabilizers, or anxiolytics, and today I fully expect those scientific dreams, too, to come true through research. For example, the following neuroscientific therapeutic strategies may someday become routine in clinical practice:
- microglia inhibition
- mitochondria repair
- anti-apoptotic therapy
- white matter connectivity restoration
- neuroprotection (enhancing neurogenesis, increasing neurotropic factors, and enhancing synaptogenesis)
- reversal of glutamate N-methyl-d-aspartate (NMDA) receptor hypofunction
- prevention of amyloid formation.
Data analysis breakthroughs. Side-by-side with the explosion of new findings and the mountains of data amassed in psychiatric neuroscience, unprecedented and revolutionary data-management techniques have emerged to facilitate the herculean task of extracting the proverbial needle from the haystack and deriving the overall import of masses of data. These techniques, whose names were not in our vocabulary during my residency days, include:
- machine learning
- artificial intelligence
- deep learning
- big data.
With the help of powerful computers and ingenious software, discovering critical nuggets of knowledge about the brain and predicting the best approaches to healing dysfunctional brains are now possible. Those powerful methods of analyzing massive data are the vehicles for transforming science fiction to reality by assembling the jigsaw puzzle(s) of the human brain, arguably the last frontier in medical science.
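As a toy illustration of the machine-learning approach described above, here is a nearest-centroid classifier on synthetic two-feature “neuroimaging” data. Everything here (the feature names, group means, and labels) is invented for illustration; this is a sketch of the technique, not a clinical model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical features (e.g., "ventricular volume" and "cortical thickness")
# for two synthetic groups; means and spreads are arbitrary illustrations.
patients = rng.normal(loc=[1.8, 2.4], scale=0.3, size=(100, 2))
controls = rng.normal(loc=[1.2, 2.9], scale=0.3, size=(100, 2))

X = np.vstack([patients, controls])
y = np.array([1] * 100 + [0] * 100)  # 1 = patient, 0 = control

# Random train/test split
idx = rng.permutation(len(X))
train, test = idx[:150], idx[150:]

# Nearest-centroid classifier: label each test point by the closer class mean
centroids = np.array([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)

accuracy = (pred == y[test]).mean()
print(f"test accuracy: {accuracy:.2f}")
```

Real applications swap in richer models (deep networks, gradient boosting) and vastly larger feature sets, but the workflow (labeled data, held-out evaluation, a learned decision rule) is the same.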
My life experiences as a psychiatric neuroscientist have convinced me that nothing is beyond the reach of scientific research. Unraveling the divine brain’s complexities will eventually become reality. So, let us never stop dreaming and fantasizing!
1. Nasrallah HA, McCalley-Whitters M, Bigelow LB, et al. A histological study of the corpus callosum in chronic schizophrenia. Psychiatry Res. 1983;8(4):251-260.
2. Nasrallah HA, McCalley-Whitters M, Jacoby CG. Cerebral ventricular enlargement in young manic males. A controlled CT study. J Affect Disord. 1982;4(1):15-19.
3. Nasrallah HA, McCalley-Whitters M, Jacoby CG. Cortical atrophy in schizophrenia and mania: a comparative CT study. J Clin Psychiatry. 1982;43(11):439-441.
4. Nasrallah HA, Skinner TE, Schmalbrock P, et al. Proton magnetic resonance spectroscopy (1H MRS) of the hippocampal formation in schizophrenia: a pilot study. Br J Psychiatry. 1994;165(4):481-485.
5. Nasrallah HA, Olson SC, McCalley-Whitters M, et al. Cerebral ventricular enlargement in schizophrenia. A preliminary follow-up study. Arch Gen Psychiatry. 1986;43(2):157-159.
High-dose lumateperone: A case report
Lumateperone is a novel antipsychotic that possesses a variety of unique receptor affinities. The recommended dose of lumateperone is 42 mg/d. In clinical trials, reductions in Positive and Negative Syndrome Scale scores observed with lumateperone, 28 mg/d and 84 mg/d, failed to separate from placebo.1 However, in these trials, safety profiles were similar for all 3 doses.
Despite the popular understanding of lumateperone’s “unexplained narrow therapeutic window,”2 we report the case of a patient with schizophrenia who responded well to lumateperone, 84 mg/d, without adverse effects or EKG changes.
Case report. Mr. W, age 26, has treatment-resistant schizophrenia (paranoid type). He failed to achieve remission on fluphenazine (10 to 25 mg/d), perphenazine (4 to 24 mg/d), risperidone (started at 4 mg/d and increased to 8 mg/d), and olanzapine (15, 20, and 25 mg/d). None of these medications eliminated his auditory or visual hallucinations. His response was most robust to perphenazine, as he reported a 50% reduction in the frequency of auditory hallucinations and a near-complete resolution of visual hallucinations (once or twice per week), but he never achieved full remission.
We started lumateperone, 42 mg/d, without a cross-taper. After 4 weeks of partial response, the patient escalated his dose to 84 mg/d on his own. At a follow-up visit 3.5 weeks after this self-directed dose increase, Mr. W reported a complete resolution of his auditory and visual hallucinations.
Six months later, Mr. W continued to receive lumateperone, 84 mg/d, without extrapyramidal symptoms, tardive dyskinesia, or other adverse effects. His QTc showed no significant change (410 ms vs 412 ms).
Although some studies indicate a possible “therapeutic window” for lumateperone dosing, clinicians should not deprive patients who partially respond to the recommended 42 mg/d dose of the opportunity for additional benefit through dose escalation. Due to the vagaries of psychiatric pathology, and unique profiles of metabolism and receptor sensitivity, there will always be patients who may require higher-than-recommended doses of lumateperone, as with all other agents.
1. Lieberman JA, Davis RE, Correll CU, et al. ITI-007 for the treatment of schizophrenia: a 4-week randomized, double-blind, controlled trial. Biol Psychiatry. 2016;79(12):952-961. doi: 10.1016/j.biopsych.2015.08.026
2. Kantrowitz JT. The potential role of lumateperone—something borrowed? something new? JAMA Psychiatry. 2020;77(4):343-344. doi:10.1001/jamapsychiatry.2019.4265
History made, history revisited
The Biden administration has signed into law the $1.9 trillion American Rescue Plan, which contains a plethora of funds targeted to people, businesses, and health systems impacted by the pandemic. According to The Economist, the bill brings COVID-related spending since December 2020 to $3 trillion (14% of prepandemic GDP) and to $6 trillion since the start of the pandemic. This type of stimulus (regarded as income, not savings, by most people) will generate unprecedented consumer spending. The risk, of course, is inflation, rising interest rates, and long-term debt.
That said, there is substantial funding targeting scientific research, vaccine distribution, public health entities, global health initiatives, rural health care, and a variety of other health-related issues. By my rough estimation, the Centers for Disease Control and Prevention will see $12 billion in incremental funding; public health projects will receive $10 billion, including $3 billion for community health centers and federally qualified health centers; and mental and behavioral health will receive over $3 billion. The Department of Health & Human Services will see substantial funding for a variety of projects. Teaching health centers will see an additional $330 million (including a $10,000 per-resident increase and payments to establish new graduate residency training programs).
The impact on low-income families and childhood poverty will be substantial and reverses the philosophical underpinning of recent welfare reforms. U.S. welfare dates back to the early 1900s, and its philosophical foundation has evolved over time. According to the Constitutional Rights Foundation (www.crf-usa.org), it began after food riots broke out during the Great Depression. The Great Depression affected children and the elderly most severely, so the nation’s willingness to implement federal welfare was high. Prior to the Depression, the only federal program providing money to low-income people was the “mothers’ pension,” designed to support poor fatherless children, but it excluded divorced, deserted, and minority mothers. President Roosevelt was able to pass the Social Security Act (1935), which supported the elderly and began federal welfare. During the Clinton presidency, welfare “as we know it” changed to include work requirements. With the passage of the current Biden legislation, those requirements are rolled back and funds are targeted broadly to low-income Americans and children.
John I. Allen, MD, MBA, AGAF
Editor in Chief
Changes required for gynecologic surgeons to achieve greater pay equity
In a recent commentary published in Obstetrics & Gynecology, Katie L. Watson, JD, and Louise P. King, MD, JD, describe the issue of “double discrimination” in gynecologic surgery. The authors outlined how lower pay in a specialty where a majority of the surgeons and all of the patients are women may impact quality of care.
The commentary raises a number of concerns in gynecologic surgery that are important to discuss. Ob.gyn. as a whole is underpaid, as are many nonprocedural specialties such as family medicine and internal medicine. When ob.gyns. were predominantly men, the same situation existed – ob.gyns. were paid less than many other procedural specialties. While we’ve come a long way from the relative value unit (RVU) originally determined from the Harvard studies 30 years ago, there is room for additional improvement.
The authors proposed several rationales to explain the pay disparities between gynecologic surgery and urology: patient gender, surgeon gender, and length of training for gynecologic surgeons. They cited comparisons between urology and gynecology regarding “anatomically similar, sex-specific procedures,” which require closer examination. Many of the code pairs selected were not actually comparable services. For example, management of Peyronie’s disease is a highly complex treatment performed by urologists that is not comparable with vaginectomy, yet these are two of the codes used in the reference the authors cited to conclude that surgeries on women are undervalued.
The overall RVUs for a procedure also depend on the global period. The Centers for Medicare & Medicaid Services defines RVUs as encompassing the total amount of work before, during, and after a procedure. If a surgery has a 90-day global period, all the work for the 90 days thereafter is bundled into the value, whereas if a procedure has a zero-day global period, only that day’s work is counted. A gynecologic surgeon who sees a patient back two or three times codes and bills for those encounters in addition to the initial procedure.
Many of the code comparisons used in the analysis of gender in RVUs compared services with different global periods. Finally, some of the services that were compared had vastly different utilization. Some of the services and codes that were compared are performed extremely rarely and for that reason have not had their values reassessed over the years. There may be inequities in the RVUs for these services, but they will account for extremely little in overall compensation.
As a former chair of the American Medical Association’s RVS Update Committee (RUC), I spent years attempting to revalue ob.gyn. procedures. CMS assigns work RVUs based on physician work, practice expense, and professional liability insurance. The work is calculated using total physician time and intensity based on surveys completed by the specialty. The American College of Obstetrician and Gynecologist’s Committee on Health Economics and Coding, and the AMA RUC have worked diligently over many years to reassess potentially misvalued services. The ultimate RVUs assigned by CMS for gynecologic surgery are determined by the surveys completed by ACOG members. One issue we encountered with reexamining some procedures under RBRVS is that they have become so low volume that it has been difficult to justify the cost and effort to revalue them.
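For context, the arithmetic CMS applies to the three components named above follows a published formula: payment = [(work RVU × work GPCI) + (practice expense RVU × PE GPCI) + (malpractice RVU × MP GPCI)] × conversion factor, where GPCIs are geographic adjustment indices. A minimal Python sketch; the RVU inputs and conversion factor below are illustrative placeholders, not actual CMS figures for any code or year:

```python
def medicare_payment(work_rvu, pe_rvu, mp_rvu,
                     work_gpci=1.0, pe_gpci=1.0, mp_gpci=1.0,
                     conversion_factor=34.61):
    """Medicare physician fee schedule payment: each RVU component
    (work, practice expense, malpractice) is scaled by its geographic
    practice cost index, summed, and multiplied by the dollar
    conversion factor."""
    total_rvu = (work_rvu * work_gpci
                 + pe_rvu * pe_gpci
                 + mp_rvu * mp_gpci)
    return total_rvu * conversion_factor

# Hypothetical RVU values for a surgical procedure (not real CMS figures):
print(f"${medicare_payment(work_rvu=15.0, pe_rvu=8.0, mp_rvu=2.5):.2f}")
```

The formula makes the text’s point visible: if survey-derived time and intensity depress the work RVU input for gynecologic procedures, the undervaluation flows directly through to payment, regardless of the GPCIs or conversion factor.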
Lack of ob.gyn. training isn’t the full story
On average, ob.gyns. have between 18 and 24 months of surgical training, which is significantly less than in other surgical specialties. Lack of training in gynecologic surgery was proposed as another explanation for reduced compensation among female gynecologic surgeons. This is a complex issue not adequately explained by training time alone. While the number of trained ob.gyns. has increased in recent decades, surgical volume has diminished, and the workload of gynecologic surgery is far lower than it used to be. Surgical volume during and after training was much higher 35 years ago, before the advent of procedures such as endometrial ablation and laparoscopic tubal ligation. Women who had finished childbearing often underwent vaginal hysterectomies to manage contraception along with various other conditions.
With the advent of minimally invasive surgery, laparoscopic sterilization became possible, which has reduced the number of hysterectomies performed. Endometrial ablation is an office-based, noninvasive procedure. The development of the levonorgestrel IUD has helped manage abnormal bleeding, further reducing the need for hysterectomy.
This reduction in surgical volume does have an impact on quality of care. The model of tracking surgical outcomes at Kaiser Health System, as mentioned by the authors, could work well in some, but not all centers. A more approachable solution to address surgical volume for the average ob.gyn. would be to implement a mentoring and coaching process whereby recently trained ob.gyns. assist their senior partner(s) in surgery. This was the model years ago: I was trained by an ob.gyn. who was trained as a general surgeon. It was through the experience of assisting on each one of his cases – and him assisting on each one of my cases – that I received incredibly thorough surgical training.
These changes in practice, however, do not impact reimbursement. Rather than discrimination based on the gender of the surgeon, lower salaries in ob.gyn. are more likely to be the result of these and other factors.
The wage and quality gap in ob.gyn.
As a predominantly female surgical specialty, some of the disparity between gynecology and urology could be explained by how each specialty values its work. Here, gender plays a role in that when ob.gyns. are surveyed during the RUC process they may undervalue their work by reporting they can perform a procedure (and the before and after care) faster than what a urologist reports. The survey results may then result in lower RVUs.
Ob.gyn. is an overpopulated specialty for the number of surgeons needed to manage the volume of gynecologic surgery. When a health system wants to hire a general ob.gyn., it doesn’t have trouble finding one, while urologists are more challenging to recruit. This is not because of the structure of resource-based relative value scale (RBRVS) – despite the overall RVUs for gynecologic surgery, gynecologic oncologists are often paid well because health systems need them – but rather to the market economy of hiring physicians in specialty areas where there is demand.
Women are also chronically undervalued for the hours that we spend with patients. Data show that we spend more time with patients, which does not generate as many RVUs, but it generates better outcomes for patients. Evidence shows that women doctors in internal medicine and family medicine have better outcomes than doctors who are men.
On Jan. 1, 2021, Medicare and other payers implemented a new structure to reporting the level of office visit based on either medical decision-making or time spent on the date of encounter. Time spent with patients will now be rewarded – increased RVUs for increased time.
Part of the solution is value-based medicine and moving away from counting RVUs. This is also an opportunity to look at where time is spent in general ob.gyn. training and redistribute it, focusing on what trainees need for their education and not what hospitals need to service labor and delivery. We should step back and look creatively at optimizing the education and the training of ob.gyns., and where possible utilize other health care professionals such as nurse practitioners and midwives to address the uncomplicated obstetric needs of the hospital which could free up ob.gyn. trainees to obtain further surgical education.
To be clear, gender discrimination in compensation is prevalent and a persistent problem in medicine – ob.gyn. is no exception. Many ob.gyns. are employed by large health systems with payment structures and incentives that don’t align with those of the physician or the patient. There is definite misalignment in the way salaries are determined. Transparency on salaries is a critical component of addressing the pay gap that exists between women and men in medicine and in other industries.
The pay gap as it relates to reimbursement for gynecologic surgery, however, is a more complex matter that relates to how the RBRVS system was developed nearly 30 years ago when gynecologic surgery was not predominantly performed by women.
Dr. Levy is a voluntary clinical professor in the department of obstetrics, gynecology, and reproductive sciences at University of California San Diego Health, the former vice president of health policy at ACOG, past chair of the AMA/RUC, and current voting member of the AMA CPT editorial panel. She reported no relevant financial disclosures.
In a recent commentary published in Obstetrics & Gynecology, Katie L. Watson, JD, and Louise P. King, MD, JD, describe the issue of “double discrimination” in gynecologic surgery. The authors outlined how lower pay in a specialty where a majority of the surgeons and all of the patients are women may impact quality of care.
The commentary raises a number of concerns in gynecologic surgery that are important to discuss. Ob.gyn. as a whole is underpaid, as are many nonprocedural specialties such as family medicine and internal medicine. When ob.gyns. were predominantly men, the same situation existed – ob.gyns. were paid less than many other procedural specialties. While we’ve come a long way from the relative value unit (RVU) originally determined from the Harvard studies 30 years ago, there is room for additional improvement.
Several rationales were proposed by the authors to explain the disparities in pay between gynecologic surgery and urology: patient gender, surgeon gender, and length of training for gynecologic surgeons. The authors cited comparisons between urology and gynecology regarding “anatomically similar, sex-specific procedures,” which require closer examination. Many of the code pairs selected were not actually comparable services. For example, management of Peyronie’s disease is a highly complex treatment performed by urologists that is not comparable with vaginectomy, yet these are two of the codes used in the reference cited by the authors to conclude that surgeries on women are undervalued.
The overall RVUs for a procedure are also dependent upon the global period. The Centers for Medicare & Medicaid Services designated RVUs as the total amount of work before, during, and after a procedure. If a surgery has a 90-day global period, all the work for 90 days thereafter is bundled into the value, whereas if something is a zero-day global, only that day’s work is counted. A gynecologic surgeon who sees a patient back two or three times is coding and billing for those encounters in addition to that initial procedure.
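The effect of the global period on what gets billed can be sketched in a few lines. All figures and the function below are hypothetical, purely to illustrate the bundling arithmetic described above, not actual CPT values or billing logic.

```python
# Hypothetical sketch of how a global period changes what is billed.
# RVU figures are illustrative only, not actual CPT valuations.

def billed_rvus(procedure_rvus: float, global_days: int,
                followup_visits: int, visit_rvus: float) -> float:
    """Total billable RVUs for a procedure plus its follow-up care.

    With a 90-day global period, postoperative visits inside the window
    are bundled into the procedure's valuation and cannot be billed
    separately; with a 0-day global, each follow-up visit is billed
    on its own in addition to the procedure.
    """
    if global_days > 0:
        # Follow-up work is already folded into procedure_rvus.
        return procedure_rvus
    return procedure_rvus + followup_visits * visit_rvus

# A 90-day global surgery: three postop checks are bundled in.
bundled = billed_rvus(20.0, 90, followup_visits=3, visit_rvus=1.3)

# A 0-day global procedure: the same three visits bill separately.
unbundled = billed_rvus(16.1, 0, followup_visits=3, visit_rvus=1.3)
```

The point of the sketch is that two procedures with very different listed RVUs can represent similar total work once postoperative visits are counted, which is why comparing codes with different global periods is misleading.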
Many of the code comparisons used in the analysis of gender in RVUs compared services with different global periods. Finally, some of the services that were compared had vastly different utilization. Some of the services and codes that were compared are performed extremely rarely and for that reason have not had their values reassessed over the years. There may be inequities in the RVUs for these services, but they will account for extremely little in overall compensation.
As a former chair of the American Medical Association’s RVS Update Committee (RUC), I spent years attempting to revalue ob.gyn. procedures. CMS assigns total RVUs based on three components: physician work, practice expense, and professional liability insurance. The work component is calculated from total physician time and intensity, based on surveys completed by the specialty. The American College of Obstetricians and Gynecologists’ Committee on Health Economics and Coding and the AMA RUC have worked diligently over many years to reassess potentially misvalued services. The ultimate RVUs assigned by CMS for gynecologic surgery are determined by the surveys completed by ACOG members. One issue we encountered in reexamining some procedures under the RBRVS is that they have become so low volume that it has been difficult to justify the cost and effort of revaluing them.
Lack of ob.gyn. training isn’t the full story
On average, ob.gyns. receive between 18 and 24 months of surgical training, significantly less than surgeons in other specialties. Lack of training in gynecologic surgery was proposed as another explanation for reduced compensation among female gynecologic surgeons, but this complex issue is not adequately explained by training time alone. While the number of trained ob.gyns. has increased in recent decades, surgical volume has diminished, and the workload of gynecologic surgery is far lower than it once was. Surgical volume during and after training was much higher 35 years ago, before the advent of alternatives such as endometrial ablation and laparoscopic tubal ligation. Women who had finished childbearing often underwent vaginal hysterectomy to manage contraception along with various other conditions.
With the advent of minimally invasive surgery, laparoscopic sterilization became possible, which has reduced the number of hysterectomies performed. Endometrial ablation is an office-based, noninvasive procedure. The development of the levonorgestrel IUD has helped manage abnormal bleeding, further reducing the need for hysterectomy.
This reduction in surgical volume does have an impact on quality of care. The model of tracking surgical outcomes at Kaiser Health System, as mentioned by the authors, could work well in some, but not all centers. A more approachable solution to address surgical volume for the average ob.gyn. would be to implement a mentoring and coaching process whereby recently trained ob.gyns. assist their senior partner(s) in surgery. This was the model years ago: I was trained by an ob.gyn. who was trained as a general surgeon. It was through the experience of assisting on each one of his cases – and him assisting on each one of my cases – that I received incredibly thorough surgical training.
These changes in practice, however, do not impact reimbursement. Rather than discrimination based on the gender of the surgeon, lower salaries in ob.gyn. are more likely to be the result of these and other factors.
The wage and quality gap in ob.gyn.
Because ob.gyn. is a predominantly female surgical specialty, some of the disparity between gynecology and urology could be explained by how each specialty values its work. Here, gender plays a role: when ob.gyns. are surveyed during the RUC process, they may undervalue their work by reporting that they can perform a procedure (and the before and after care) faster than what a urologist reports. The survey results may then translate into lower RVUs.
Ob.gyn. is an overpopulated specialty for the number of surgeons needed to manage the volume of gynecologic surgery. When a health system wants to hire a general ob.gyn., it doesn’t have trouble finding one, while urologists are more challenging to recruit. This is not because of the structure of the resource-based relative value scale (RBRVS) – despite the overall RVUs for gynecologic surgery, gynecologic oncologists are often paid well because health systems need them – but rather a function of the market economy of hiring physicians in specialty areas where there is demand.
Women are also chronically undervalued for the hours that we spend with patients. Data show that we spend more time with patients, which does not generate as many RVUs, but it generates better outcomes for patients. Evidence shows that women doctors in internal medicine and family medicine have better outcomes than doctors who are men.
On Jan. 1, 2021, Medicare and other payers implemented a new structure for reporting the level of an office visit, based on either medical decision-making or time spent on the date of the encounter. Time spent with patients is now rewarded – increased RVUs for increased time.
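The time-based option can be sketched as a simple lookup. The ranges below follow the 2021 CPT total-time bands for established-patient office visits as I understand them; treat them as illustrative and confirm against the current CPT manual before any real coding use.

```python
# Hedged sketch: choosing a 2021 E/M office-visit level by total time
# on the date of the encounter (established patients). Time bands are
# believed to match 2021 CPT guidance but should be verified.
from typing import Optional

TIME_RANGES = [
    ("99212", 10, 19),  # 10-19 minutes
    ("99213", 20, 29),  # 20-29 minutes
    ("99214", 30, 39),  # 30-39 minutes
    ("99215", 40, 54),  # 40-54 minutes
]

def visit_level_by_time(minutes: int) -> Optional[str]:
    """Return the E/M code whose time band contains `minutes`, else None."""
    for code, low, high in TIME_RANGES:
        if low <= minutes <= high:
            return code
    return None
```

Under this structure, a 45-minute visit reports a higher-level code than a 25-minute visit regardless of procedures performed, which is the mechanism that now rewards time spent with patients.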
Part of the solution is value-based medicine and moving away from counting RVUs. This is also an opportunity to look at where time is spent in general ob.gyn. training and redistribute it, focusing on what trainees need for their education rather than what hospitals need to service labor and delivery. We should step back and look creatively at optimizing the education and training of ob.gyns. and, where possible, utilize other health care professionals such as nurse practitioners and midwives to address the uncomplicated obstetric needs of the hospital, which could free ob.gyn. trainees to obtain further surgical education.
To be clear, gender discrimination in compensation is prevalent and a persistent problem in medicine – ob.gyn. is no exception. Many ob.gyns. are employed by large health systems with payment structures and incentives that don’t align with those of the physician or the patient. There is definite misalignment in the way salaries are determined. Transparency on salaries is a critical component of addressing the pay gap that exists between women and men in medicine and in other industries.
The pay gap as it relates to reimbursement for gynecologic surgery, however, is a more complex matter that relates to how the RBRVS system was developed nearly 30 years ago when gynecologic surgery was not predominantly performed by women.
Dr. Levy is a voluntary clinical professor in the department of obstetrics, gynecology, and reproductive sciences at University of California San Diego Health, the former vice president of health policy at ACOG, past chair of the AMA/RUC, and current voting member of the AMA CPT editorial panel. She reported no relevant financial disclosures.
A paleolithic raw bar, and the human brush with extinction
This essay is adapted from the newly released book, “A History of the Human Brain: From the Sea Sponge to CRISPR, How Our Brain Evolved.”
“He was a bold man that first ate an oyster.” – Jonathan Swift
That man or, just as likely, that woman, may have done so out of necessity. It was either eat this glistening, gray blob of briny goo or perish.
Beginning 190,000 years ago, a glacial age we identify today as Marine Isotope Stage 6, or MIS6, had set in, cooling and drying out much of the planet. There was widespread drought, leaving the African plains a harsher, more barren substrate for survival – an arena of competition, desperation, and starvation for many species, including ours. Some estimates have the sapiens population dipping to just a few hundred people during MIS6. Like other apes today, we were an endangered species. But through some nexus of intelligence, ecological exploitation, and luck, we managed. Anthropologists argue over what part of Africa would’ve been hospitable enough to rescue sapiens from Darwinian oblivion. Arizona State University archaeologist Curtis Marean, PhD, believes the continent’s southern shore is a good candidate.
For 2 decades, Dr. Marean has overseen excavations at a site called Pinnacle Point on the South African coast. The region has over 9,000 plant species, including the world’s most diverse population of geophytes, plants with underground energy-storage organs such as bulbs, tubers, and rhizomes. These subterranean stores are rich in calories and carbohydrates, and, by virtue of being buried, are protected from most other species (save the occasional tool-wielding chimpanzee). They are also adapted to cold climates and, when cooked, easily digested. All in all, a coup for hunter-gatherers.
The other enticement at Pinnacle Point could be found with a few easy steps toward the sea. Mollusks. Geological samples from MIS6 show South Africa’s shores were packed with mussels, oysters, clams, and a variety of sea snails. We almost certainly turned to them for nutrition.
Dr. Marean’s research suggests that, sometime around 160,000 years ago, at least one group of sapiens began supplementing their terrestrial diet by exploiting the region’s rich shellfish beds. This is the oldest evidence to date of humans consistently feasting on seafood – easy, predictable, immobile calories. No hunting required. As inland Africa dried up, learning to shuck mussels and oysters was a key adaptation to coastal living, one that supported our later migration out of the continent.
Dr. Marean believes the change in behavior was possible thanks to our already keen brains, which supported an ability to track tides, especially spring tides. Spring tides occur twice a month with each new and full moon and result in the greatest difference between high and low tidewaters. The people of Pinnacle Point learned to exploit this cycle. “By tracking tides, we would have had easy, reliable access to high-quality proteins and fats from shellfish every 2 weeks as the ocean receded,” he says. “Whereas you can’t rely on land animals to always be in the same place at the same time.” Work by Jan De Vynck, PhD, a professor at Nelson Mandela University in South Africa, supports this idea, showing that foraging shellfish beds under optimal tidal conditions can yield a staggering 3,500 calories per hour!
“I don’t know if we owe our existence to seafood, but it was certainly important for the population [that Dr.] Curtis studies. That place is full of mussels,” said Ian Tattersall, PhD, curator emeritus with the American Museum of Natural History in New York.
“And I like the idea that during a population bottleneck we got creative and learned how to focus on marine resources.” Innovations, Dr. Tattersall explained, typically occur in small, fixed populations. Large populations have too much genetic inertia to support radical innovation; the status quo is enough to survive. “If you’re looking for evolutionary innovation, you have to look at smaller groups.”
MIS6 wasn’t the only near-extinction in our past. During the Pleistocene epoch, roughly 2.5 million to 12,000 years ago, humans tended to maintain a small population, hovering around a million and later growing to maybe 8 million at most. Periodically, our numbers dipped as climate shifts, natural disasters, and food shortages brought us dangerously close to extinction. Modern humans are descended from the hearty survivors of these bottlenecks.
One especially dire stretch occurred around 1 million years ago. Our effective population (the number of breeding individuals) shriveled to around 18,000, smaller than that of other apes at the time. Worse, our genetic diversity – the insurance policy on evolutionary success and the ability to adapt – plummeted. A similar near-extinction may have occurred around 75,000 years ago, the result of a massive volcanic eruption in Sumatra.
Our smarts and adaptability helped us endure these tough times – omnivorism helped us weather scarcity.
A sea of vitamins
Both Dr. Marean and Dr. Tattersall agree that the sapiens hanging on in southern Africa couldn’t have lived entirely on shellfish.
Most likely they also spent time hunting and foraging roots inland, making pilgrimages to the sea during spring tides. Dr. Marean believes coastal cuisine may have allowed a paltry human population to hang on until climate change led to more hospitable terrain. He’s not entirely sold on the idea that marine life was necessarily a driver of human brain evolution.
By the time we incorporated seafood into our diets we were already smart, our brains shaped through millennia of selection for intelligence. “Being a marine forager requires a certain degree of sophisticated smarts,” he said. It requires tracking the lunar cycle and planning excursions to the coast at the right times. Shellfish were simply another source of calories.
Unless you ask Michael Crawford.
Dr. Crawford is a professor at Imperial College London and a strident believer that our brains are those of sea creatures. Sort of.
In 1972, he copublished a paper concluding that the brain is structurally and functionally dependent on an omega-3 fatty acid called docosahexaenoic acid, or DHA. The human brain is composed of nearly 60% fat, so it’s not surprising that certain fats are important to brain health. Nearly 50 years after Dr. Crawford’s study, omega-3 supplements are now a multi-billion-dollar business.
Omega-3s, or more formally, omega-3 polyunsaturated fatty acids (PUFAs), are essential fats, meaning they aren’t produced by the body and must be obtained through diet. We get them from vegetable oils, nuts, seeds, and animals that eat such things. But take an informal poll, and you’ll find most people probably associate omega fatty acids with fish and other seafood.
In the 1970s and 1980s, scientists took notice of the low rates of heart disease in Eskimo communities. Research linked their cardiovascular health to a high-fish diet (though fish cannot produce omega-3s, they source them from algae), and eventually the medical and scientific communities began to rethink fat. Study after study found omega-3 fatty acids to be healthy. They were linked with a lower risk for heart disease and overall mortality. All those decades of parents forcing various fish oils on their grimacing children now had some science behind them. There is such a thing as a good fat.
Omega-3s, especially DHA and eicosapentaenoic acid (EPA), are integral to the brain itself. Omega fats provide structure to neuronal cell membranes and are crucial in neuron-to-neuron communication. They increase levels of a protein called brain-derived neurotrophic factor (BDNF), which supports neuronal growth and survival. A growing body of evidence shows omega-3 supplementation may slow down the process of neurodegeneration, the gradual deterioration of the brain that results in Alzheimer’s disease and other forms of dementia.
Popping a daily omega-3 supplement or, better still, eating a seafood-rich diet, may increase blood flow to the brain. In 2019, the International Society for Nutritional Psychiatry Research recommended omega-3s as an adjunct therapy for major depressive disorder. PUFAs appear to reduce the risk for and severity of mood disorders such as depression and to boost attention in children with ADHD as effectively as drug therapies.
Many researchers claim there would’ve been plenty of DHA available on land to support early humans, and marine foods were just one of many sources.
Not Dr. Crawford.
He believes that brain development and function are not only dependent on DHA but, in fact, DHA sourced from the sea was critical to mammalian brain evolution. “The animal brain evolved 600 million years ago in the ocean and was dependent on DHA, as well as compounds such as iodine, which is also in short supply on land,” he said. “To build a brain, you need these building blocks, which were rich at sea and on rocky shores.”
Dr. Crawford cites his early biochemical work showing DHA isn’t readily accessible from the muscle tissue of land animals. Using DHA tagged with a radioactive isotope, he and his colleagues in the 1970s found that “ready-made” DHA, like that found in shellfish, is incorporated into the developing rat brain with 10-fold greater efficiency than omega-3s from plant and land-animal sources, where the fat exists mainly as DHA’s metabolic precursor, alpha-linolenic acid. “I’m afraid the idea that ample DHA was available from the fats of animals on the savanna is just not true,” he said. According to Dr. Crawford, our tiny, wormlike ancestors were able to evolve primitive nervous systems and flit through the silt thanks to the abundance of healthy fat to be had by living in the ocean and consuming algae.
For over 40 years, Dr. Crawford has argued that rising rates of mental illness are a result of post–World War II dietary changes, especially the move toward land-sourced food and the medical community’s subsequent support of low-fat diets. He feels that omega-3s from seafood were critical to humans’ rapid neural march toward higher cognition, and are therefore critical to brain health. “The continued rise in mental illness is an incredibly important threat to mankind and society, and moving away from marine foods is a major contributor,” said Dr. Crawford.
University of Sherbrooke (Que.) physiology professor Stephen Cunnane, PhD, tends to agree that aquatically sourced nutrients were crucial to human evolution. It’s the importance of coastal living he’s not sure about. He believes hominins would’ve incorporated fish from lakes and rivers into their diet for millions of years. In his view, it wasn’t just omega-3s that contributed to our big brains, but a cluster of nutrients found in fish: iodine, iron, zinc, copper, and selenium among them. “I think DHA was hugely important to our evolution and brain health but I don’t think it was a magic bullet all by itself,” he said. “Numerous other nutrients found in fish and shellfish were very probably important, too, and are now known to be good for the brain.”
Dr. Marean agrees. “Accessing the marine food chain could have had a huge impact on fertility, survival, and overall health, including brain health, in part, due to the high return on omega-3 fatty acids and other nutrients.” But, he speculates, before MIS6, hominins would have had access to plenty of brain-healthy terrestrial nutrition, including meat from animals that consumed omega-3–rich plants and grains.
Dr. Cunnane agrees with Dr. Marean to a degree. He’s confident that higher intelligence evolved gradually over millions of years as mutations inching the cognitive needle forward conferred survival and reproductive advantages – but he maintains that certain advantages like, say, being able to shuck an oyster, allowed an already intelligent brain to thrive.
Foraging marine life in the waters off of Africa likely played an important role in keeping some of our ancestors alive and supported our subsequent propagation throughout the world. By this point, the human brain was already a marvel of consciousness and computing, not too dissimilar to the one we carry around today.
In all likelihood, Pleistocene humans probably got their nutrients and calories wherever they could. If we lived inland, we hunted. Maybe we speared the occasional catfish. We sourced nutrients from fruits, leaves, and nuts. A few times a month, those of us near the coast enjoyed a feast of mussels and oysters.
Dr. Stetka is an editorial director at Medscape.com, a former neuroscience researcher, and a nonpracticing physician. A version of this article first appeared on Medscape.
This essay is adapted from the newly released book, “A History of the Human Brain: From the Sea Sponge to CRISPR, How Our Brain Evolved.”
“He was a bold man that first ate an oyster.” – Jonathan Swift
That man or, just as likely, that woman, may have done so out of necessity. It was either eat this glistening, gray blob of briny goo or perish.
Beginning 190,000 years ago, a glacial age we identify today as Marine Isotope Stage 6, or MIS6, had set in, cooling and drying out much of the planet. There was widespread drought, leaving the African plains a harsher, more barren substrate for survival – an arena of competition, desperation, and starvation for many species, including ours. Some estimates have the sapiens population dipping to just a few hundred people during MIS6. Like other apes today, we were an endangered species. But through some nexus of intelligence, ecological exploitation, and luck, we managed. Anthropologists argue over what part of Africa would’ve been hospitable enough to rescue sapiens from Darwinian oblivion. Arizona State University archaeologist Curtis Marean, PhD, believes the continent’s southern shore is a good candidate.
For 2 decades, Dr. Marean has overseen excavations at a site called Pinnacle Point on the South African coast. The region has over 9,000 plant species, including the world’s most diverse population of geophytes, plants with underground energy-storage organs such as bulbs, tubers, and rhizomes. These subterranean stores are rich in calories and carbohydrates, and, by virtue of being buried, are protected from most other species (save the occasional tool-wielding chimpanzee). They are also adapted to cold climates and, when cooked, easily digested. All in all, a coup for hunter-gatherers.
The other enticement at Pinnacle Point could be found with a few easy steps toward the sea. Mollusks. Geological samples from MIS6 show South Africa’s shores were packed with mussels, oysters, clams, and a variety of sea snails. We almost certainly turned to them for nutrition.
Dr. Marean’s research suggests that, sometime around 160,000 years ago, at least one group of sapiens began supplementing their terrestrial diet by exploiting the region’s rich shellfish beds. This is the oldest evidence to date of humans consistently feasting on seafood – easy, predictable, immobile calories. No hunting required. As inland Africa dried up, learning to shuck mussels and oysters was a key adaptation to coastal living, one that supported our later migration out of the continent.
Dr. Marean believes the change in behavior was possible thanks to our already keen brains, which supported an ability to track tides, especially spring tides. Spring tides occur twice a month with each new and full moon and result in the greatest difference between high and low tidewaters. The people of Pinnacle Point learned to exploit this cycle. “By tracking tides, we would have had easy, reliable access to high-quality proteins and fats from shellfish every 2 weeks as the ocean receded,” he says. “Whereas you can’t rely on land animals to always be in the same place at the same time.” Work by Jan De Vynck, PhD, a professor at Nelson Mandela University in South Africa, supports this idea, showing that foraging shellfish beds under optimal tidal conditions can yield a staggering 3,500 calories per hour!
“I don’t know if we owe our existence to seafood, but it was certainly important for the population [that Dr.] Curtis studies. That place is full of mussels,” said Ian Tattersall, PhD, curator emeritus with the American Museum of Natural History in New York.
“And I like the idea that during a population bottleneck we got creative and learned how to focus on marine resources.” Innovations, Dr. Tattersall explained, typically occur in small, fixed populations. Large populations have too much genetic inertia to support radical innovation; the status quo is enough to survive. “If you’re looking for evolutionary innovation, you have to look at smaller groups.”
MIS6 wasn’t the only near-extinction in our past. During the Pleistocene epoch, roughly 2.5 million to 12,000 years ago, humans tended to maintain a small population, hovering around a million and later growing to maybe 8 million at most. Periodically, our numbers dipped as climate shifts, natural disasters, and food shortages brought us dangerously close to extinction. Modern humans are descended from the hearty survivors of these bottlenecks.
One especially dire stretch occurred around 1 million years ago. Our effective population (the number of breeding individuals) shriveled to around 18,000, smaller than that of other apes at the time. Worse, our genetic diversity – the insurance policy on evolutionary success and the ability to adapt – plummeted. A similar near extinction may have occurred around 75,000 years ago, the result of a massive volcanic eruption in Sumatra.
Our smarts and adaptability helped us endure these tough times – omnivorism helped us weather scarcity.
A sea of vitamins
Both Dr. Marean and Dr. Tattersall agree that the sapiens hanging on in southern Africa couldn’t have lived entirely on shellfish.
Most likely they also spent time hunting and foraging roots inland, making pilgrimages to the sea during spring tides. Dr. Marean believes coastal cuisine may have allowed a paltry human population to hang on until climate change led to more hospitable terrain. He’s not entirely sold on the idea that marine life was necessarily a driver of human brain evolution.
By the time we incorporated seafood into our diets we were already smart, our brains shaped through millennia of selection for intelligence. “Being a marine forager requires a certain degree of sophisticated smarts,” he said. It requires tracking the lunar cycle and planning excursions to the coast at the right times. Shellfish were simply another source of calories.
Unless you ask Michael Crawford.
Dr. Crawford is a professor at Imperial College London and a strident believer that our brains are those of sea creatures. Sort of.
In 1972, he copublished a paper concluding that the brain is structurally and functionally dependent on an omega-3 fatty acid called docosahexaenoic acid, or DHA. The human brain is composed of nearly 60% fat, so it’s not surprising that certain fats are important to brain health. Nearly 50 years after Dr. Crawford’s study, omega-3 supplements are now a multi-billion-dollar business.
Omega-3s, or more formally, omega-3 polyunsaturated fatty acids (PUFAs), are essential fats, meaning they aren’t produced by the body and must be obtained through diet. We get them from vegetable oils, nuts, seeds, and animals that eat such things. But take an informal poll, and you’ll find most people probably associate omega fatty acids with fish and other seafood.
In the 1970s and 1980s, scientists took notice of the low rates of heart disease in Eskimo communities. Research linked their cardiovascular health to a high-fish diet (though fish cannot produce omega-3s, they source them from algae), and eventually the medical and scientific communities began to rethink fat. Study after study found omega-3 fatty acids to be healthy. They were linked with a lower risk for heart disease and overall mortality. All those decades of parents forcing various fish oils on their grimacing children now had some science behind them. There is such a thing as a good fat.
Omega-3s also appear to be important for brain health, especially DHA and eicosapentaenoic acid, or EPA. Omega fats provide structure to neuronal cell membranes and are crucial in neuron-to-neuron communication. They increase levels of a protein called brain-derived neurotrophic factor (BDNF), which supports neuronal growth and survival. A growing body of evidence shows omega-3 supplementation may slow down the process of neurodegeneration, the gradual deterioration of the brain that results in Alzheimer’s disease and other forms of dementia.
Popping a daily omega-3 supplement or, better still, eating a seafood-rich diet, may increase blood flow to the brain. In 2019, the International Society for Nutritional Psychiatry Research recommended omega-3s as an adjunct therapy for major depressive disorder. PUFAs appear to reduce the risk for and severity of mood disorders such as depression and to boost attention in children with ADHD as effectively as drug therapies.
Many researchers claim there would’ve been plenty of DHA available on land to support early humans, and marine foods were just one of many sources.
Not Dr. Crawford.
He believes that brain development and function are not only dependent on DHA but, in fact, DHA sourced from the sea was critical to mammalian brain evolution. “The animal brain evolved 600 million years ago in the ocean and was dependent on DHA, as well as compounds such as iodine, which is also in short supply on land,” he said. “To build a brain, you need these building blocks, which were rich at sea and on rocky shores.”
Dr. Crawford cites his early biochemical work showing DHA isn’t readily accessible from the muscle tissue of land animals. Using DHA tagged with a radioactive isotope, he and his colleagues in the 1970s found that “ready-made” DHA, like that found in shellfish, is incorporated into the developing rat brain with 10-fold greater efficiency than plant- and land animal–sourced DHA, where it exists as its metabolic precursor alpha-linolenic acid. “I’m afraid the idea that ample DHA was available from the fats of animals on the savanna is just not true,” he contends. According to Dr. Crawford, our tiny, wormlike ancestors were able to evolve primitive nervous systems and flit through the silt thanks to the abundance of healthy fat to be had by living in the ocean and consuming algae.
For over 40 years, Dr. Crawford has argued that rising rates of mental illness are a result of post–World War II dietary changes, especially the move toward land-sourced food and the medical community’s subsequent support of low-fat diets. He feels that omega-3s from seafood were critical to humans’ rapid neural march toward higher cognition, and are therefore critical to brain health. “The continued rise in mental illness is an incredibly important threat to mankind and society, and moving away from marine foods is a major contributor,” said Dr. Crawford.
University of Sherbrooke (Que.) physiology professor Stephen Cunnane, PhD, tends to agree that aquatically sourced nutrients were crucial to human evolution. It’s the importance of coastal living he’s not sure about. He believes hominins would’ve incorporated fish from lakes and rivers into their diet for millions of years. In his view, it wasn’t just omega-3s that contributed to our big brains, but a cluster of nutrients found in fish: iodine, iron, zinc, copper, and selenium among them. “I think DHA was hugely important to our evolution and brain health but I don’t think it was a magic bullet all by itself,” he said. “Numerous other nutrients found in fish and shellfish were very probably important, too, and are now known to be good for the brain.”
Dr. Marean agrees. “Accessing the marine food chain could have had a huge impact on fertility, survival, and overall health, including brain health, in part, due to the high return on omega-3 fatty acids and other nutrients.” But, he speculates, before MIS6, hominins would have had access to plenty of brain-healthy terrestrial nutrition, including meat from animals that consumed omega-3–rich plants and grains.
Dr. Cunnane agrees with Dr. Marean to a degree. He’s confident that higher intelligence evolved gradually over millions of years as mutations inching the cognitive needle forward conferred survival and reproductive advantages – but he maintains that certain advantages like, say, being able to shuck an oyster, allowed an already intelligent brain to thrive.
Foraging marine life in the waters off of Africa likely played an important role in keeping some of our ancestors alive and supported our subsequent propagation throughout the world. By this point, the human brain was already a marvel of consciousness and computing, not too dissimilar to the one we carry around today.
In all likelihood, Pleistocene humans probably got their nutrients and calories wherever they could. If we lived inland, we hunted. Maybe we speared the occasional catfish. We sourced nutrients from fruits, leaves, and nuts. A few times a month, those of us near the coast enjoyed a feast of mussels and oysters.
Dr. Stetka is an editorial director at Medscape.com, a former neuroscience researcher, and a nonpracticing physician. A version of this article first appeared on Medscape.
The significance of mismatch repair deficiency in endometrial cancer
Women with Lynch syndrome are known to carry an approximately 60% lifetime risk of endometrial cancer. These cancers result from inherited deleterious mutations in genes that code for mismatch repair proteins. However, mismatch repair deficiency (MMR-d) is not exclusively found in the tumors of patients with Lynch syndrome, and much is being learned about this group of endometrial cancers, their behavior, and their vulnerability to targeted therapies.
During DNA replication, recombination, or chemical and physical damage, mismatches in base pairs frequently occur. Mismatch repair proteins identify and repair such errors, and the loss of their function causes the accumulation of insertions or deletions in short, repetitive sequences of DNA. This phenomenon can be measured using polymerase chain reaction (PCR) screening of known microsatellites to look for the accumulation of errors, a phenotype called microsatellite instability (MSI). The accumulation of errors in DNA sequences is thought to lead to mutations in cancer-related genes.
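To make the PCR readout concrete, the toy sketch below compares repeat lengths at a handful of microsatellite markers between tumor and matched normal DNA. The marker names echo the classic Bethesda panel, but the repeat counts and the 30% instability threshold are illustrative assumptions, not assay specifications:

```python
# Illustrative sketch of MSI classification from PCR fragment sizing.
# Marker names follow the Bethesda panel; repeat counts are made up.

def classify_msi(normal_repeats, tumor_repeats, unstable_threshold=0.3):
    """Compare repeat lengths at each microsatellite marker; a marker is
    'unstable' if the tumor allele length differs from the matched normal.
    Tumors with a large fraction of unstable markers are called MSI-high."""
    markers = normal_repeats.keys()
    unstable = [m for m in markers if tumor_repeats[m] != normal_repeats[m]]
    fraction = len(unstable) / len(markers)
    if fraction >= unstable_threshold:
        return "MSI-high", unstable
    elif unstable:
        return "MSI-low", unstable
    return "MSS (microsatellite stable)", unstable

# Hypothetical repeat counts at five markers in normal vs. tumor DNA
normal = {"BAT-25": 25, "BAT-26": 26, "D2S123": 14, "D5S346": 12, "D17S250": 10}
tumor  = {"BAT-25": 22, "BAT-26": 21, "D2S123": 14, "D5S346": 9,  "D17S250": 10}

status, shifted = classify_msi(normal, tumor)
print(status, shifted)  # → MSI-high ['BAT-25', 'BAT-26', 'D5S346']
```

In practice, clinical assays score instability from capillary electrophoresis traces or sequencing reads rather than single integer repeat counts; the point here is only the tumor-versus-normal comparison.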
The four predominant mismatch repair genes are MLH1, MSH2, MSH6, and PMS2. Loss of function in these genes may occur through a germline/inherited mechanism, as in Lynch syndrome, or may be sporadically acquired. Approximately 20%-30% of endometrial cancers exhibit MMR-d; acquired, sporadic loss of function accounts for the majority of cases, with only approximately 10% the result of Lynch syndrome. Mutations in PMS2 are the dominant genotype of Lynch syndrome, whereas loss of function in MLH1 is the most frequent aberration in sporadic cases of MMR-d endometrial cancer.1
Endometrial cancers can be tested for MMR-d by performing immunohistochemistry for loss of expression of the four most common MMR proteins. If there is loss of expression of MLH1, additional triage testing can be performed to determine whether this loss is caused by the epigenetic phenomenon of promoter hypermethylation. When hypermethylation is present, it excludes Lynch syndrome and suggests a sporadic origin of the disease. If there is loss of expression of the MMR genes (including loss of MLH1 with subsequent negative testing for promoter methylation), the patient should receive genetic testing for a germline mutation indicating Lynch syndrome. As an adjunct or alternative to immunohistochemistry, PCR studies or next-generation sequencing can measure microsatellite instability by identifying the expansion or contraction of repetitive DNA sequences in the tumor, compared with normal tissue.2
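The triage pathway described above is essentially a small decision tree. The sketch below is schematic only; the function name and result strings are our own shorthand, and real workups involve clinical nuance the sketch omits:

```python
def triage_mmr(ihc_loss, mlh1_promoter_methylated=None):
    """Schematic triage of MMR immunohistochemistry (IHC) results in
    endometrial cancer. `ihc_loss` is the set of MMR proteins whose
    expression is lost on IHC; `mlh1_promoter_methylated` is the result
    of reflex methylation testing, if it has been performed."""
    if not ihc_loss:
        return "MMR-proficient: no reflex Lynch testing indicated"
    if "MLH1" in ihc_loss:
        if mlh1_promoter_methylated is None:
            # Loss of MLH1 triggers reflex promoter methylation testing
            return "MLH1 loss: perform promoter hypermethylation testing"
        if mlh1_promoter_methylated:
            # Hypermethylation explains the loss: sporadic, not Lynch
            return "Sporadic MMR-d (epigenetic MLH1 silencing)"
    # Any other loss pattern, or unmethylated MLH1 loss -> germline workup
    return "MMR-d not explained by methylation: germline (Lynch) testing"

print(triage_mmr({"MLH1", "PMS2"}))
print(triage_mmr({"MLH1", "PMS2"}, mlh1_promoter_methylated=True))
print(triage_mmr({"MSH2", "MSH6"}))
```

Note that on IHC, loss of MLH1 typically drags PMS2 expression down with it (and MSH2 loss pairs with MSH6), which is why the example loss sets come in pairs.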
It is of the highest importance to identify endometrial cancers caused by Lynch syndrome, because doing so enables providers to offer cascade testing of relatives and to intensify screening or preventive measures for the many other cancers (such as colon, upper gastrointestinal, breast, and urothelial) for which these patients are at risk. Therefore, routine screening for MMR-d tumors is recommended in all cases of endometrial cancer, not simply those diagnosed at a young age or with a strong family history.3 Clinical screening that relies on family history, primary tumor site, and age to trigger testing for Lynch syndrome, as in the Bethesda Guidelines, has a sensitivity of only 82%. In a meta-analysis of testing results from 1,159 women with endometrial cancer, 43% of patients diagnosed with Lynch syndrome via molecular analysis would have been missed by clinical screening using the Bethesda Guidelines.2
Discovering cases of Lynch syndrome is not the only benefit of routine testing for MMR-d in endometrial cancers. There is also significant value in characterizing sporadic mismatch repair–deficient tumors, because this information provides prognostic information and guides therapy. Tumors with a microsatellite instability–high/MMR-d phenotype were identified as one of the four distinct molecular subgroups of endometrial cancer by the Cancer Genome Atlas.4 Patients with this molecular profile exhibited “intermediate” prognostic outcomes: better than the “serous-like” cancers with p53 mutations, yet worse than patients in the POLE ultramutated group, who rarely experience recurrence or death, even in the setting of unfavorable histology.
Beyond prognostication, the molecular profile of endometrial cancers also influences their responsiveness to therapeutics, highlighting the importance of splitting, not lumping, endometrial cancers into relevant molecular subgroups when designing research and practicing clinical medicine. The PORTEC-3 trial studied 410 women with high-risk endometrial cancer and randomized participants to receive either adjuvant radiation alone or radiation with chemotherapy.5 There were no differences in progression-free survival between the two therapeutic strategies when analyzed in aggregate. When analyzed by Cancer Genome Atlas molecular subgroup, however, there was a clear benefit from chemotherapy for patients with p53 mutations. For patients with MMR-d tumors, no such benefit was observed: they did no better with the addition of platinum and taxane chemotherapy than with radiation alone. Unfortunately, recurrence rates for MMR-d tumors remained high, suggesting that we can and need to discover therapies more effective than conventional radiation or platinum and taxane chemotherapy.

Targeted therapy may be the solution to this problem. Through microsatellite instability, MMR-d tumors accumulate somatic mutations that produce neoantigens, creating an immunogenic environment. This state up-regulates immune checkpoint proteins, which serve as an actionable target for anti-PD-1 antibodies such as pembrolizumab, a drug that has been shown to be highly active against MMR-d endometrial cancer. In the landmark KEYNOTE-158 trial, patients with advanced, recurrent solid tumors exhibiting MMR-d were treated with pembrolizumab.6 This included 49 patients with endometrial cancer, among whom there was a 79% response rate. Subsequently, pembrolizumab was granted Food and Drug Administration approval for use in advanced, recurrent MMR-d/MSI-high endometrial cancer.
Trials are currently enrolling patients to explore the utility of this drug in the up-front setting in both early- and late-stage disease with a hope that this targeted therapy can do what conventional cytotoxic chemotherapy has failed to do.
Therefore, given the clinical significance of mismatch repair deficiency, all patients with endometrial cancer should be investigated for loss of expression in these proteins, and if present, considered for the possibility of Lynch syndrome. While most will not have an inherited cause, this information regarding their tumor biology remains critically important in both prognostication and decision-making surrounding other therapies and their eligibility for promising clinical trials.
Dr. Rossi is assistant professor in the division of gynecologic oncology at the University of North Carolina at Chapel Hill. She has no conflicts of interest to declare. Email her at obnews@mdedge.com.
References
1. Simpkins SB et al. Hum. Mol. Genet. 1999;8:661-6.
2. Kahn R et al. Cancer. 2019 Sep 15;125(18):2172-3183.
3. SGO Clinical Practice Statement: Screening for Lynch Syndrome in Endometrial Cancer. https://www.sgo.org/clinical-practice/guidelines/screening-for-lynch-syndrome-in-endometrial-cancer/
4. Kandoth et al. Nature. 2013;497(7447):67-73.
5. Leon-Castillo A et al. J Clin Oncol. 2020 Oct 10;38(29):3388-97.
6. Marabelle A et al. J Clin Oncol. 2020 Jan 1;38(1):1-10.
Women with Lynch syndrome are known to carry an approximately 60% lifetime risk of endometrial cancer. These cancers result from inherited deleterious mutations in genes that code for mismatch repair proteins. However, mismatch repair deficiency (MMR-d) is not exclusively found in the tumors of patients with Lynch syndrome, and much is being learned about this group of endometrial cancers, their behavior, and their vulnerability to targeted therapies.
During DNA replication, recombination, or chemical and physical damage, mismatches in base pairs frequently occur. Mismatch repair proteins identify and repair these errors, and the loss of their function leads to the accumulation of insertions or deletions in short, repetitive sequences of DNA. This phenomenon can be measured by polymerase chain reaction (PCR) screening of known microsatellites for the accumulation of errors, a phenotype called microsatellite instability (MSI). The accumulation of errors in these DNA sequences is thought to produce mutations in cancer-related genes.
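The logic of MSI calling can be illustrated with a brief sketch: compare the repeat lengths (allele sizes) observed at each microsatellite marker in tumor versus matched normal tissue, and call the tumor MSI-high when a sufficient fraction of markers show novel alleles. The marker names below are from the classic NCI five-marker panel, but the data values and the exact threshold are illustrative only; real MSI testing uses validated assays and instruments, not this toy code.

```python
# Illustrative sketch of MSI calling from paired tumor/normal marker data.
# Data values and threshold are hypothetical; clinical MSI testing uses
# validated panels and fragment-analysis instruments.

def is_unstable(tumor_lengths, normal_lengths):
    """A marker is 'unstable' if the tumor shows repeat lengths
    (allele sizes) absent from the matched normal tissue."""
    return bool(set(tumor_lengths) - set(normal_lengths))

def classify_msi(marker_results):
    """MSI-high if >=30% of markers are unstable (for a five-marker
    panel, 2 or more unstable markers)."""
    unstable = sum(1 for t, n in marker_results.values() if is_unstable(t, n))
    fraction = unstable / len(marker_results)
    if fraction >= 0.3:
        return "MSI-high"
    return "MSI-low" if unstable else "MSS"

# Toy data: the tumor gained a shorter allele at two of five markers
results = {
    "BAT25":   ([120, 118], [120]),
    "BAT26":   ([115, 111], [115]),
    "D2S123":  ([200], [200]),
    "D5S346":  ([180], [180]),
    "D17S250": ([150], [150]),
}
print(classify_msi(results))  # MSI-high (2 of 5 markers unstable)
```

Two of five markers carry tumor-only alleles here, so the sketch reports MSI-high; a tumor with no novel alleles would be called microsatellite stable (MSS).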
The four predominant mismatch repair genes are MLH1, MSH2, MSH6, and PMS2. Loss of function in these genes may be germline/inherited, as in Lynch syndrome, or sporadically acquired. Approximately 20%-30% of endometrial cancers exhibit MMR-d, with acquired, sporadic loss of function accounting for the majority of cases and only approximately 10% resulting from Lynch syndrome. Mutations in PMS2 are the dominant genotype of Lynch syndrome, whereas loss of function in MLH1 is the most frequent aberration in sporadic cases of MMR-d endometrial cancer.1
Endometrial cancers can be tested for MMR-d by immunohistochemistry, looking for loss of expression of the four MMR proteins. If MLH1 expression is lost, additional triage testing can determine whether the loss is caused by the epigenetic phenomenon of promoter hypermethylation; when methylation is present, it excludes Lynch syndrome and suggests a sporadic origin of the disease. If MMR expression is lost (including loss of MLH1 with subsequent negative testing for promoter methylation), the patient should receive genetic testing for a germline mutation indicating Lynch syndrome. As an adjunct or alternative to immunohistochemistry, PCR studies or next-generation sequencing can measure microsatellite instability by identifying the expansion or reduction of repetitive DNA sequences in the tumor compared with normal tissue.2
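The triage pathway above is essentially a small decision tree, which can be sketched as follows. The function names and return strings are illustrative, not a clinical protocol; actual workflows follow institutional and SGO guidance, and germline testing is accompanied by genetic counseling.

```python
# Sketch of the IHC-based Lynch triage described above.
# Labels are illustrative; this is not a substitute for clinical guidance.

def triage_mmr_ihc(lost_proteins, mlh1_promoter_methylated=None):
    """lost_proteins: set of MMR proteins with absent IHC expression
    (a subset of {'MLH1', 'MSH2', 'MSH6', 'PMS2'})."""
    if not lost_proteins:
        return "MMR-proficient: no further Lynch triage from IHC"
    if "MLH1" in lost_proteins:
        if mlh1_promoter_methylated is None:
            return "Test MLH1 promoter hypermethylation"
        if mlh1_promoter_methylated:
            return "Sporadic MMR-d (epigenetic silencing): Lynch unlikely"
    # Loss of MSH2/MSH6/PMS2, or MLH1 loss without promoter methylation
    return "Refer for germline testing (possible Lynch syndrome)"

print(triage_mmr_ihc({"MLH1", "PMS2"}))   # next step is the methylation assay
print(triage_mmr_ihc({"MLH1", "PMS2"}, mlh1_promoter_methylated=True))
print(triage_mmr_ihc({"MSH2", "MSH6"}))   # proceeds straight to germline testing
```

Note the asymmetry the article describes: only MLH1 loss triggers the methylation step, because hypermethylation is the common sporadic mechanism for that gene; loss of the other proteins goes directly to germline evaluation.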
It is critically important to identify endometrial cancers caused by Lynch syndrome, because doing so enables providers to offer cascade testing of relatives and to intensify screening or preventive measures for the many other cancers (such as colon, upper gastrointestinal, breast, and urothelial) for which these patients are at risk. Therefore, routine screening for MMR-d tumors is recommended in all cases of endometrial cancer, not only for patients diagnosed at a young age or with a strong family history.3 Using family history, primary tumor site, and age as triggers for Lynch syndrome screening, as the Bethesda Guidelines do, is associated with an 82% sensitivity for identifying Lynch syndrome. In a meta-analysis of testing results from 1,159 women with endometrial cancer, 43% of patients diagnosed with Lynch syndrome by molecular analysis would have been missed by clinical screening using the Bethesda Guidelines.2
Discovering cases of Lynch syndrome is not the only benefit of routine testing for MMR-d in endometrial cancers. Characterizing sporadic mismatch repair–deficient tumors is also valuable because it provides prognostic information and guides therapy. Tumors with an MSI-high phenotype/MMR-d were identified as one of the four distinct molecular subgroups of endometrial cancer by the Cancer Genome Atlas.4 Patients with this molecular profile exhibited “intermediate” prognostic outcomes: better than the “serous-like” cancers with p53 mutations, yet worse than the POLE ultramutated group, whose patients rarely experience recurrence or death, even in the setting of unfavorable histology.
Beyond prognostication, the molecular profile of endometrial cancers also influences their responsiveness to therapeutics, highlighting the importance of splitting, not lumping, endometrial cancers into relevant molecular subgroups when designing research and practicing clinical medicine. The PORTEC-3 trial studied 410 women with high-risk endometrial cancer, randomizing participants to receive either adjuvant radiation alone or radiation with chemotherapy.5 When analyzed in aggregate, there were no differences in progression-free survival between the two therapeutic strategies. When analyzed by Cancer Genome Atlas molecular subgroup, however, there was a clear benefit from chemotherapy for patients with p53 mutations. For patients with MMR-d tumors, no such benefit was observed: they did no better with the addition of platinum and taxane chemotherapy than with radiation alone. Unfortunately, recurrence rates for MMR-d tumors remained high, underscoring the need for therapies more effective than conventional radiation or platinum and taxane chemotherapy. Targeted therapy may be the solution to this problem. Through microsatellite instability, MMR-d tumors accumulate somatic mutations that generate neoantigens, creating an immunogenic environment. This state up-regulates immune checkpoint proteins, which serve as an actionable target for anti–PD-1 antibodies such as pembrolizumab, a drug shown to be highly active against MMR-d endometrial cancer. In the landmark KEYNOTE-158 trial, patients with advanced, recurrent solid tumors exhibiting MMR-d were treated with pembrolizumab.6 This included 49 patients with endometrial cancer, among whom there was a 79% response rate. Subsequently, pembrolizumab was granted Food and Drug Administration approval for use in advanced, recurrent MMR-d/MSI-high endometrial cancer.
Trials are currently enrolling patients to explore the utility of this drug in the up-front setting in both early- and late-stage disease, with the hope that this targeted therapy can accomplish what conventional cytotoxic chemotherapy has not.
Therefore, given the clinical significance of mismatch repair deficiency, all patients with endometrial cancer should be tested for loss of expression of these proteins and, if loss is present, evaluated for the possibility of Lynch syndrome. While most will not have an inherited cause, this information about tumor biology remains critically important for prognostication, for decision-making surrounding other therapies, and for determining eligibility for promising clinical trials.
Dr. Rossi is assistant professor in the division of gynecologic oncology at the University of North Carolina at Chapel Hill. She has no conflicts of interest to declare. Email her at obnews@mdedge.com.
References
1. Simpkins SB et al. Hum Mol Genet. 1999;8(4):661-6.
2. Kahn RM et al. Cancer. 2019;125(18):3172-83.
3. SGO Clinical Practice Statement: Screening for Lynch Syndrome in Endometrial Cancer. https://www.sgo.org/clinical-practice/guidelines/screening-for-lynch-syndrome-in-endometrial-cancer/
4. Kandoth C et al. Nature. 2013;497(7447):67-73.
5. Leon-Castillo A et al. J Clin Oncol. 2020;38(29):3388-97.
6. Marabelle A et al. J Clin Oncol. 2020;38(1):1-10.