New and future therapies for lupus nephritis
Treatment for lupus nephritis has changed dramatically in recent years. Only 10 years ago, rheumatologists and nephrologists, whether specializing in adult or pediatric medicine, treated lupus nephritis with a similar regimen of monthly intravenous cyclophosphamide (Cytoxan) and glucocorticoids. Although the regimen was effective, side effects such as infection, hair loss, and infertility were extremely common.
Effective but very toxic therapy is common in autoimmune diseases. In the last decade, clinical trials have shown that less toxic drugs are as effective for treating lupus nephritis. This article will review these new developments in therapy for lupus nephritis; the shift toward equally effective but less toxic regimens can serve as a model for other fields of medicine.
DEMOGRAPHICS ARE IMPORTANT
Although numerous factors have prognostic value in lupus nephritis (eg, serum creatinine, proteinuria, renal biopsy findings), the most important to consider when designing and interpreting studies are race and socioeconomic variables.
A retrospective study in Miami, FL,1 evaluated 213 patients with lupus nephritis, of whom 47% were Hispanic, 44% African American, and 20% white. At baseline, African Americans had higher blood pressure, higher serum creatinine levels, and lower household income. After 6 years, African Americans fared the worst in terms of doubling of serum creatinine, developing end-stage renal disease, and death; whites had the best outcomes, and Hispanics were in between. Low income was found to be a significant risk factor, independent of racial background.
In a similar retrospective study in New York City in 128 patients (43% white, 40% Hispanic, and 17% African American) with proliferative lupus nephritis,2 disease was much more likely to progress to renal failure over 10 years in patients living in a poor neighborhood, even after adjustment for race.
We need to keep in mind that racial and socioeconomic factors correlate with disease severity when we design and interpret studies of lupus nephritis. Study groups must be carefully balanced with patients of similar racial and socioeconomic profiles. Study findings must be interpreted with caution; for example, whether results from a study from China are applicable to an African American with lupus nephritis in New York City is unclear.
OLDER STANDARD THERAPY: EFFECTIVE BUT TOXIC
The last large National Institutes of Health study that involved only cyclophosphamide and a glucocorticoid was published in 2001,3 with 21 patients receiving cyclophosphamide alone and 20 patients receiving cyclophosphamide plus methylprednisolone. Although lupus nephritis improved, serious side effects occurred in one-third to one-half of patients in each group and included hypertension, hyperlipidemia, valvular heart disease, avascular necrosis, premature menopause, and major infections, including herpes zoster.
Less cyclophosphamide works just as well
The multicenter, prospective Euro-Lupus Nephritis Trial4 randomized 90 patients with proliferative lupus nephritis to receive either standard high-dose intravenous (IV) cyclophosphamide therapy (six monthly pulses and two quarterly pulses, with doses increasing according to the white blood cell count) or low-dose IV cyclophosphamide therapy (six pulses every 2 weeks at a fixed dose of 500 mg). Both regimens were followed by azathioprine (Imuran).
At 4 years, the two treatment groups were not significantly different in terms of treatment failure, remission rates, serum creatinine levels, 24-hour proteinuria, and freedom from renal flares. However, the rates of side effects were significantly different, with more patients in the low-dosage group free of severe infection.
One question about this study is whether its results apply to an American lupus nephritis population, since 84% of the patients were white. Subsequent studies indicate that this regimen is probably also safe and effective in other racial groups in the United States.
At 10-year follow-up,5 both treatment groups still had equally excellent rates of freedom from end-stage renal disease. Serum creatinine levels and 24-hour proteinuria were also excellent and nearly identical in both groups. Nearly three-quarters of patients still needed glucocorticoid therapy and more than half still needed immunosuppressive therapy, but the rates were not statistically significantly different between the treatment groups.
The cumulative dose of cyclophosphamide was 9.5 g in the standard-treatment group and 5.5 g in the low-dose group. This difference in exposure could make a tremendous difference to patients, not only for immediate side effects such as early menopause and infections, but for the risk of cancer in later decades.
This study showed clearly that low-dose cyclophosphamide is an option for induction therapy. Drawbacks of the study were that the population was mostly white and that patients had only moderately severe disease.
Low-dose cyclophosphamide has largely replaced the older National Institutes of Health regimen, although during the last decade drug therapy has undergone more changes.
MYCOPHENOLATE AND AZATHIOPRINE: ALTERNATIVES TO CYCLOPHOSPHAMIDE
In a Chinese study, mycophenolate was better than cyclophosphamide for induction
In a study in Hong Kong, Chan et al6 randomized 42 patients with severe lupus nephritis to receive either mycophenolate mofetil (available in the United States as CellCept; 2 g/day for 6 months, then 1 g/day for 6 months) or oral cyclophosphamide (2.5 mg/kg per day for 6 months) followed by azathioprine (1.5–2.0 mg/kg per day) for 6 months. Both groups also received prednisolone during the year.
At the end of the first year, the two groups were not significantly different in their rates of complete remission, partial remission, and relapse. The rate of infection, although not significantly different, was higher in the cyclophosphamide group (33% vs 19%). Two patients (10%) died in the cyclophosphamide group, but the difference in mortality rates was not statistically significant.
Nearly 5 years later,7 rates of chronic renal failure and relapse were still statistically the same in the two groups. Infections were fewer in the mycophenolate group (13% vs 40%, P = .013). The rate of amenorrhea was 36% in the cyclophosphamide group and only 4% in the mycophenolate group (P = .004). Four patients in the cyclophosphamide group and none in the mycophenolate group reached the composite end point of end-stage renal failure or death (P = .062).
This study appeared to offer a new option with equal efficacy and fewer side effects than standard therapy. However, its applicability to non-Chinese populations remained to be shown.
In a US study, mycophenolate or azathioprine was better than cyclophosphamide as maintenance
In a study in Miami,8 59 patients with lupus nephritis were given standard induction therapy with IV cyclophosphamide plus glucocorticoids for 6 months, then randomly assigned to one of three maintenance therapies for 1 to 3 years: IV injections of cyclophosphamide every 3 months (standard therapy), oral azathioprine, or oral mycophenolate. The population was 93% female, their average age was 33 years, and nearly half were African American, with many of the others being Hispanic. Patients tended to have severe disease, with nearly two-thirds having nephrotic syndrome.
After 6 years, there had been more deaths in the cyclophosphamide group than in the azathioprine group (P = .02) and in the mycophenolate group, although the latter difference was not statistically significant (P = .11). The combined rate of death and chronic renal failure was significantly higher with cyclophosphamide than with either of the oral agents. The cyclophosphamide group also had the highest relapse rate during the maintenance phase.
The differences in side effects were even more dramatic. Amenorrhea affected 32% of patients in the cyclophosphamide group, and only 7% and 6% in the azathioprine and mycophenolate groups, respectively. Rates of infections were 68% in the cyclophosphamide group and 28% and 21% in the azathioprine and mycophenolate groups, respectively. Patients given cyclophosphamide had 13 hospital days per patient per year, while the other groups each had only 1.
This study showed that maintenance therapy with oral azathioprine or mycophenolate was more effective and had fewer adverse effects than standard IV cyclophosphamide therapy. As a result of this study, oral agents for maintenance therapy became the new standard, but the question remained whether oral agents could safely be used for induction.
In a US study, mycophenolate was better than cyclophosphamide for induction
In a noninferiority study, Ginzler et al9 randomized 140 patients with severe lupus nephritis to receive either monthly IV cyclophosphamide or oral mycophenolate as induction therapy for 6 months. Adjunctive care with glucocorticoids was given in both groups. The study population was from 18 US academic centers and was predominantly female, and more than half were African American.
After 24 weeks, 22.5% of the mycophenolate patients were in complete remission by very strict criteria vs only 4% of those given cyclophosphamide (P = .005). The trend for partial remissions was also in favor of mycophenolate, although the difference was not statistically significant. The rate of complete and partial remissions, a prespecified end point, was significantly higher in the mycophenolate group. Although the study was trying to evaluate equivalency, it actually showed superiority for mycophenolate induction therapy.
Serum creatinine levels declined in both groups, but more in the mycophenolate group by 24 weeks. Urinary protein levels fell the same amount in both groups. At 3 years, the groups were statistically equivalent in terms of renal flares, renal failures, and deaths. However, the study groups were small, and the mycophenolate group did have a better trend for both renal failure (N = 4 vs 7) and deaths (N = 4 vs 8).
Mycophenolate also had fewer side effects, including infection, although again the numbers were too small to show statistical significance. The exception was diarrhea (N = 15 in the mycophenolate group vs 2 in the cyclophosphamide group).
A drawback of the study is that it allowed crossover: a patient for whom therapy was failing after 3 months could switch to the other treatment group, introducing potential confounding. Other problems involved the small population size and the question of whether results from patients in the United States were applicable to others worldwide.
In a worldwide study, mycophenolate was at least equivalent to cyclophosphamide for induction
The Aspreva Lupus Management Study (ALMS)10 used a similar design with 370 patients worldwide (United States, China, South America, and Europe) in one of the largest trials ever conducted in lupus nephritis. Patients were randomized to 6 months of induction therapy with either IV cyclophosphamide or oral mycophenolate but could not cross over.
At 6 months, response rates were identical between the two groups, with response defined as a combination of specific improvement in proteinuria, serum creatinine, and hematuria (50%–55%). In terms of individual renal and nonrenal variables, both groups appeared identical.
However, the side effect profiles differed between the two groups. As expected for mycophenolate, diarrhea was the most common side effect (occurring in 28% vs 12% in the cyclophosphamide group). Nausea and vomiting were more common with cyclophosphamide (45% and 37% respectively vs 14% and 13% in the mycophenolate group). Cyclophosphamide also caused hair loss in 35%, vs 10% in the mycophenolate group.
There were 14 deaths overall, which is a very low number considering the patients’ severity of illness, and it indicates the better results now achieved with therapy. The mortality rate was higher in the mycophenolate group (5% vs 3%), but the difference was not statistically significant. Six of the nine deaths with mycophenolate were from the same center in China, and none were from Europe or the United States. In summary, the study did not show that mycophenolate was superior to IV cyclophosphamide for induction therapy, but that they were equivalent in efficacy with different side effect profiles.
Membranous nephropathy: Mycophenolate vs cyclophosphamide
Less evidence is available about treatment for membranous disease, which is characterized by heavy proteinuria and the nephrotic syndrome but usually does not progress to renal failure. Radhakrishnan et al11 combined data from the trial by Ginzler et al9 and the ALMS trial10 and found 84 patients with pure membranous lupus, who were equally divided between the treatment groups receiving IV cyclophosphamide and mycophenolate. Consistent with the larger group’s data, mycophenolate and cyclophosphamide performed similarly in terms of efficacy, but there was a slightly higher rate of side effects with cyclophosphamide.
Maintenance therapy: Mycophenolate superior to azathioprine
The ALMS Maintenance Trial12 evaluated maintenance therapy in the same worldwide population that was studied for induction therapy. Of the 370 patients involved in the induction phase that compared IV cyclophosphamide and oral mycophenolate, 227 responded sufficiently to be rerandomized in a controlled, double-blinded trial of 36 months of maintenance therapy with corticosteroids and either mycophenolate (1 g twice daily) or azathioprine (2 mg/kg per day).
In intention-to-treat analysis, the time to treatment failure (ie, doubling of the serum creatinine level, progressing to renal failure, or death) was significantly shorter in the azathioprine group (P = .003). Every individual end point—end-stage renal disease, renal flares, doubling of serum creatinine, rescue immunosuppression required—was in favor of mycophenolate maintenance. At 3 years, the completion rate was 63% with mycophenolate and 49% with azathioprine. Serious adverse events and withdrawals because of adverse events were more common in the azathioprine group.
In summary, mycophenolate was superior to azathioprine in maintaining renal response and in preventing relapse in patients with active lupus nephritis who responded to induction therapy with either mycophenolate or IV cyclophosphamide. Mycophenolate was found to be superior regardless of initial induction treatment, race, or region and was confirmed by all key secondary end points.
Only one of the 227 patients died during the 3 years—from an auto accident. Again, this indicates the dramatically improved survival today compared with a decade ago.
RITUXIMAB: PROMISING BUT UNPROVEN
Rituximab (Rituxan) was originally approved to treat B-cell lymphoma, then rheumatoid arthritis, and most recently vasculitis. Evidence thus far is mixed regarding its use as a treatment for lupus nephritis. Although randomized clinical trials have not found it to be superior to standard regimens, there are many signs that it may be effective.
Rituximab in uncontrolled studies
Terrier et al13 analyzed prospective data from 136 patients with systemic lupus erythematosus, most of whom had renal disease, from the French Autoimmunity and Rituximab registry. Response occurred in 71% of patients using rituximab, with no difference found between patients receiving rituximab monotherapy and those concomitantly receiving immunosuppressive agents.
Melander et al14 retrospectively studied 19 women and 1 man who had been treated with rituximab for severe lupus nephritis and followed for at least 1 year. Three patients had concurrent therapy with cyclophosphamide, and 10 patients continued rituximab as maintenance therapy; 12 patients had lupus nephritis that had been refractory to standard treatment, and 6 had relapsing disease.
At a median follow-up of 22 months, 12 patients (60%) had achieved complete or partial renal remission.
Condon et al15 treated 21 patients who had severe lupus nephritis with two doses of rituximab and IV methylprednisolone 2 weeks apart, then maintenance therapy with mycophenolate without any oral steroids. At a mean follow-up of 35 ± 14 months, 16 patients (76%) were in complete remission, with a mean time to remission of 12 months. Two (9.5%) achieved partial remission. The rate of toxicity was low.
Thus, rituximab appears promising in uncontrolled studies.
Placebo-controlled trials fail to prove rituximab effective
LUNAR trial. On the other hand, the largest placebo-controlled trial to evaluate rituximab in patients with proliferative lupus nephritis, the Lupus Nephritis Assessment With Rituximab (LUNAR) trial,16 found differences in favor of rituximab, but none reached statistical significance. The trial randomized 140 patients to receive either mycophenolate plus periodic rituximab infusions or mycophenolate plus placebo infusions for 1 year. All patients received the same dosage of glucocorticoids, which was tapered over the year.
At the end of 1 year, the groups were not statistically different in terms of complete renal response and partial renal response. Rituximab appeared less likely to produce no response, but the difference was not statistically significant.
African Americans appeared to have a higher response rate to rituximab (70% in the rituximab group achieved a response vs 45% in the control group), but again, the difference did not reach statistical significance, and the total study population of African Americans was only 40.
Rituximab did have a statistically significant positive effect on two serologic markers at 1 year: levels of anti-dsDNA fell faster and complement rose faster. In addition, rates of adverse and serious adverse events were similar between the two groups, with no new or unexpected “safety signals.”
This study can be interpreted in a number of ways. The number of patients may have been too small and the follow-up too short to show significance. On the other hand, adding rituximab to full-dose mycophenolate and steroids, an already effective regimen, may simply confer no additional benefit.
EXPLORER trial. Similarly, for patients with lupus without nephritis, the Exploratory Phase II/III SLE Evaluation of Rituximab (EXPLORER) trial17 also tested rituximab against a background of an effective therapeutic regimen and found no additional benefit. This study had design problems similar to those of the LUNAR trial.
Rituximab as rescue therapy
The evidence so far indicates that rituximab may have a role as rescue therapy for refractory or relapsing disease. Rituximab must be used with other therapies, but maintenance corticosteroid therapy is not necessary. Its role as a first-line agent in induction therapy for lupus nephritis remains unclear, although it may have an important role for nonwhites. In general, it has been well tolerated. Until a large randomized trial indicates otherwise, it should not be used as a first-line therapy.
The US Food and Drug Administration (FDA) sent out a warning about the danger of progressive multifocal leukoencephalopathy as an adverse effect of rituximab and of mycophenolate, but this does not appear to be a major concern for most patients and is only likely to occur in those who have been over-immunosuppressed for many years.
MULTITARGET THERAPY
The concept of using multiple drugs simultaneously, such as mycophenolate, steroids, and rituximab, is increasingly being tried. Multitarget therapy offers the advantage of combining different modes of action, and it may cause fewer side effects because the dosage of each individual drug can be lower when it is combined with other immunosuppressives.
Bao et al18 in China randomly assigned 40 patients with diffuse proliferative and membranous nephritis to 6 to 9 months of induction treatment with either multitarget therapy (mycophenolate, tacrolimus [Prograf], and glucocorticoids) or IV cyclophosphamide. More complete remissions occurred in the multitarget therapy group, both at 6 months (50% vs 5%) and at 9 months (65% vs 15%). Most adverse events were less frequent in the multitarget therapy group, although three patients (15%) in the multitarget therapy group developed new-onset hypertension vs none in the cyclophosphamide group.
NEW MEDICATIONS
Entirely new classes of drugs are being developed with immunomodulatory effects, including tolerance molecules, cytokine blockers, inhibitors of human B lymphocyte stimulator, and costimulatory blockers.
Belimumab offers small improvement for lupus
Belimumab (Benlysta) is a human monoclonal antibody that inhibits the biologic activity of human B lymphocyte stimulator; it has recently been approved by the FDA for systemic lupus erythematosus. In a worldwide study,19 867 patients with systemic lupus erythematosus were randomized to receive either belimumab (1 mg/kg or 10 mg/kg) or placebo.
The primary end point was the reduction of disease activity by a scoring system (SELENA-SLEDAI) that incorporated multiple features of lupus, including arthritis, vasculitis, proteinuria, rash, and others. Patients in the belimumab group had better outcomes, but the results were not dramatic. Because the drug is so expensive (about $25,000 per year) and the improvement offered is only incremental, this drug will not likely change the treatment of lupus very much.
Moreover, patients with lupus nephritis were not included in the study, but a new study is being planned to do so. Improvement is harder to demonstrate in lupus nephritis than in rheumatoid arthritis and systemic lupus erythematosus: significant changes in creatinine levels and 24-hour urinary protein must be achieved, rather than more qualitative signs and symptoms of joint pain, rash, and feeling better. Although belimumab is still unproven for lupus nephritis, it might be worth trying for patients failing other therapy.
Laquinimod: A promising experimental drug
Laquinimod is an oral immunomodulatory drug with a number of effects, including down-regulating major histocompatibility complex II, chemokines, and adhesion-related molecules related to inflammation. It has been studied in more than 2,500 patients with multiple sclerosis. Pilot studies of its use in lupus nephritis are now under way. If it shows promise, a large randomized controlled trial will be conducted.
Abatacept is in clinical trials
Abatacept (Orencia), a costimulation blocker, is undergoing clinical trials in lupus nephritis. Results should be available shortly.
INDIVIDUALIZE THERAPY
This past decade has seen such an increase in options to treat lupus nephritis that therapy can now be individualized.
Choosing IV cyclophosphamide vs mycophenolate
As a result of recent trials, doctors in the United States are increasingly using mycophenolate as the first-line drug for lupus nephritis. In Europe, however, many are choosing the shorter regimen of IV cyclophosphamide because of the results of the Euro-Lupus study.
Nowadays, I tend to use IV cyclophosphamide as the first-line drug only for patients with severe crescentic glomerulonephritis or a very high serum creatinine level. In such cases, there is more experience with cyclophosphamide, and such severe disease does not allow the luxury of trying different therapies sequentially. If such a severely ill patient insists that a future pregnancy is very important, an alternative therapy of mycophenolate plus rituximab should be considered. I prefer mycophenolate for induction and maintenance therapy in most patients.
Dosing and formulation considerations for mycophenolate
Large dosages of mycophenolate are much better tolerated when broken up throughout the day. A patient who cannot tolerate 1 g twice daily may be able to tolerate 500 mg four times a day. The formulation can also make a difference. Some patients tolerate enteric-coated mycophenolate sodium (Myfortic) better than CellCept, and vice versa.
For patients who cannot tolerate mycophenolate, azathioprine is an acceptable alternative. In addition, for a patient who is already doing well on azathioprine, there is no need to change to mycophenolate.
Long maintenance therapy now acceptable
The ALMS Maintenance Trial12 found 3 years of maintenance therapy to be safe and effective. Such a long maintenance period is increasingly viewed as important, especially for patients in their teens and 20s, as it allows them to live a normal life, ie, to finish their education, get married, and become settled socially. Whether 5 years of maintenance therapy or even 10 years is advisable is still unknown.
Treatment during pregnancy
Neither mycophenolate nor azathioprine is recommended during pregnancy, although data on their safety in this setting are limited. Because there is much more experience with azathioprine during pregnancy from renal transplantation, I recommend either switching from mycophenolate to azathioprine or, if the disease has been well controlled, trying to stop medication altogether.
REFERENCES
1. Contreras G, Lenz O, Pardo V, et al. Outcomes in African Americans and Hispanics with lupus nephritis. Kidney Int 2006; 69:1846–1851.
2. Barr RG, Seliger S, Appel GB, et al. Prognosis in proliferative lupus nephritis: the role of socio-economic status and race/ethnicity. Nephrol Dial Transplant 2003; 18:2039–2046.
3. Illei GG, Austin HA, Crane M, et al. Combination therapy with pulse cyclophosphamide plus pulse methylprednisolone improves long-term renal outcome without adding toxicity in patients with lupus nephritis. Ann Intern Med 2001; 135:248–257.
4. Houssiau FA, Vasconcelos C, D’Cruz D, et al. Immunosuppressive therapy in lupus nephritis: the Euro-Lupus Nephritis Trial, a randomized trial of low-dose versus high-dose intravenous cyclophosphamide. Arthritis Rheum 2002; 46:2121–2131.
5. Houssiau FA, Vasconcelos C, D’Cruz D, et al. The 10-year follow-up data of the Euro-Lupus Nephritis Trial comparing low-dose and high-dose intravenous cyclophosphamide. Ann Rheum Dis 2010; 69:61–64.
6. Chan TM, Li FK, Tang CS, et al. Efficacy of mycophenolate mofetil in patients with diffuse proliferative lupus nephritis. Hong Kong–Guangzhou Nephrology Study Group. N Engl J Med 2000; 343:1156–1162.
7. Chan TM, Tse KC, Tang CS, Mok MY, Li FK; Hong Kong Nephrology Study Group. Long-term study of mycophenolate mofetil as continuous induction and maintenance treatment for diffuse proliferative lupus nephritis. J Am Soc Nephrol 2005; 16:1076–1084.
8. Contreras G, Pardo V, Leclercq B, et al. Sequential therapies for proliferative lupus nephritis. N Engl J Med 2004; 350:971–980.
9. Ginzler EM, Dooley MA, Aranow C, et al. Mycophenolate mofetil or intravenous cyclophosphamide for lupus nephritis. N Engl J Med 2005; 353:2219–2228.
10. Appel GB, Contreras G, Dooley MA, et al. Mycophenolate mofetil versus cyclophosphamide for induction treatment of lupus nephritis. J Am Soc Nephrol 2009; 20:1103–1112.
11. Radhakrishnan J, Moutzouris DA, Ginzler EM, Solomons N, Siempos II, Appel GB. Mycophenolate mofetil and intravenous cyclophosphamide are similar as induction therapy for class V lupus nephritis. Kidney Int 2010; 77:152–160.
12. Dooley MA, Jayne D, Ginzler EM, et al; for the ALMS Group. Mycophenolate versus azathioprine as maintenance therapy for lupus nephritis. N Engl J Med 2011; 365:1886–1895.
13. Terrier B, Amoura Z, Ravaud P, et al; Club Rhumatismes et Inflammation. Safety and efficacy of rituximab in systemic lupus erythematosus: results from 136 patients from the French AutoImmunity and Rituximab registry. Arthritis Rheum 2010; 62:2458–2466.
14. Melander C, Sallée M, Troillet P, et al. Rituximab in severe lupus nephritis: early B-cell depletion affects long-term renal outcome. Clin J Am Soc Nephrol 2009; 4:579–587.
15. Condon MB, Griffith M, Cook HT, Levy J, Lightstone L, Cairns T. Treatment of class IV lupus nephritis with rituximab & mycophenolate mofetil (MMF) with no oral steroids is effective and safe (abstract). J Am Soc Nephrol 2010; 21(suppl):625A–626A.
16. Furie RA, Looney RJ, Rovin E, et al. Efficacy and safety of rituximab in subjects with active proliferative lupus nephritis (LN): results from the randomized, double-blind phase III LUNAR study (abstract). Arthritis Rheum 2009; 60(suppl 1):S429.
17. Merrill JT, Neuwelt CM, Wallace DJ, et al. Efficacy and safety of rituximab in moderately-to-severely active systemic lupus erythematosus: the randomized, double-blind, phase II/III systemic lupus erythematosus evaluation of rituximab trial. Arthritis Rheum 2010; 62:222–233.
18. Bao H, Liu ZH, Xie HL, Hu WX, Zhang HT, Li LS. Successful treatment of class V+IV lupus nephritis with multitarget therapy. J Am Soc Nephrol 2008; 19:2001–2010.
19. Navarra SV, Guzmán RM, Gallacher AE, et al; BLISS-52 Study Group. Efficacy and safety of belimumab in patients with active systemic lupus erythematosus: a randomised, placebo-controlled, phase 3 trial. Lancet 2011; 377:721–731.
Treatment for lupus nephritis has changed dramatically in recent years. Only 10 years ago, rheumatologists and nephrologists, whether specializing in adult or pediatric medicine, treated lupus nephritis with a similar regimen of monthly intravenous cyclophosphamide (Cytoxan) and glucocorticoids. Although the regimen is effective, side effects such as infection, hair loss, and infertility were extremely common.
Effective but very toxic therapy is common in autoimmune diseases. In the last decade, clinical trials have shown that less toxic drugs are as effective for treating lupus nephritis. This article will review new developments in therapy for lupus nephritis, which can be viewed as a prototype for other fields of medicine.
DEMOGRAPHICS ARE IMPORTANT
Although numerous factors have prognostic value in lupus nephritis (eg, serum creatinine, proteinuria, renal biopsy findings), the most important to consider when designing and interpreting studies are race and socioeconomic variables.
A retrospective study in Miami, FL,1 evaluated 213 patients with lupus nephritis, of whom 47% were Hispanic, 44% African American, and 20% white. At baseline, African Americans had higher blood pressure, higher serum creatinine levels, and lower household income. After 6 years, African Americans fared the worst in terms of doubling of serum creatinine, developing end-stage renal disease, and death; whites had the best outcomes, and Hispanics were in between. Low income was found to be a significant risk factor, independent of racial background.
In a similar retrospective study in New York City in 128 patients (43% white, 40% Hispanic, and 17% African American) with proliferative lupus nephritis,2 disease was much more likely to progress to renal failure over 10 years in patients living in a poor neighborhood, even after adjustment for race.
We need to keep in mind that racial and socioeconomic factors correlate with disease severity when we design and interpret studies of lupus nephritis. Study groups must be carefully balanced with patients of similar racial and socioeconomic profiles. Study findings must be interpreted with caution; for example, whether results from a study from China are applicable to an African American with lupus nephritis in New York City is unclear.
OLDER STANDARD THERAPY: EFFECTIVE BUT TOXIC
The last large National Institutes of Health study that involved only cyclophosphamide and a glucocorticoid was published in 2001,3 with 21 patients receiving cyclophosphamide alone and 20 patients receiving cyclophosphamide plus methylprednisolone. Although lupus nephritis improved, serious side effects occurred in one-third to one-half of patients in each group and included hypertension, hyperlipidemia, valvular heart disease, avascular necrosis, premature menopause, and major infections, including herpes zoster.
Less cyclophosphamide works just as well
The multicenter, prospective Euro-Lupus Nephritis Trial4 randomized 90 patients with proliferative lupus nephritis to receive either standard high-dose intravenous (IV) cyclophosphamide therapy (six monthly pulses and two quarterly pulses, with doses increasing according to the white blood cell count) or low-dose IV cyclophosphamide therapy (six pulses every 2 weeks at a fixed dose of 500 mg). Both regimens were followed by azathioprine (Imuran).
At 4 years, the two treatment groups were not significantly different in terms of treatment failure, remission rates, serum creatinine levels, 24-hour proteinuria, and freedom from renal flares. However, the rates of side effects were significantly different, with more patients in the low-dose group free of severe infection.
One question about this study is whether its findings apply to a US lupus nephritis population, since 84% of the patients were white. Subsequent studies indicate that this regimen is probably also safe and effective for other racial groups in the United States.
At 10-year follow-up,5 both treatment groups still had identical excellent rates of freedom from end-stage renal disease. Serum creatinine and 24-hour proteinuria were also at excellent levels and identical in both groups. Nearly three quarters of patients still needed glucocorticoid therapy and more than half still needed immunosuppressive therapy, but the rates were not statistically significantly different between the treatment groups.
The cumulative dose of cyclophosphamide was 9.5 g in the standard-treatment group and 5.5 g in the low-dose group. This difference in exposure could make a tremendous difference to patients, not only for immediate side effects such as early menopause and infections, but for the risk of cancer in later decades.
This study showed clearly that low-dose cyclophosphamide is an option for induction therapy. Drawbacks of the study were that the population was mostly white and that patients had only moderately severe disease.
Low-dose cyclophosphamide has largely replaced the older National Institutes of Health regimen, although drug therapy has continued to evolve over the last decade.
MYCOPHENOLATE AND AZATHIOPRINE: ALTERNATIVES TO CYCLOPHOSPHAMIDE
In a Chinese study, mycophenolate was better than cyclophosphamide for induction
In a study in Hong Kong, Chan et al6 randomized 42 patients with severe lupus nephritis to receive either mycophenolate mofetil (available in the United States as CellCept; 2 g/day for 6 months, then 1 g/day for 6 months) or oral cyclophosphamide (2.5 mg/kg per day for 6 months) followed by azathioprine (1.5–2.0 mg/kg per day) for 6 months. Both groups also received prednisolone during the year.
At the end of the first year, the two groups were not significantly different in their rates of complete remission, partial remission, and relapse. The rate of infection, although not significantly different, was higher in the cyclophosphamide group (33% vs 19%). Two patients (10%) died in the cyclophosphamide group, but the difference in mortality rates was not statistically significant.
Nearly 5 years later,7 rates of chronic renal failure and relapse were still statistically the same in the two groups. Infections were fewer in the mycophenolate group (13% vs 40%, P = .013). The rate of amenorrhea was 36% in the cyclophosphamide group and only 4% in the mycophenolate group (P = .004). Four patients in the cyclophosphamide group and none in the mycophenolate group reached the composite end point of end-stage renal failure or death (P = .062).
This study appeared to offer a new option with equal efficacy and fewer side effects than standard therapy. However, its applicability to non-Chinese populations remained to be shown.
In a US study, mycophenolate or azathioprine was better than cyclophosphamide as maintenance
In a study in Miami,8 59 patients with lupus nephritis were given standard induction therapy with IV cyclophosphamide plus glucocorticoids for 6 months, then randomly assigned to one of three maintenance therapies for 1 to 3 years: IV injections of cyclophosphamide every 3 months (standard therapy), oral azathioprine, or oral mycophenolate. The population was 93% female, their average age was 33 years, and nearly half were African American, with many of the others being Hispanic. Patients tended to have severe disease, with nearly two-thirds having nephrotic syndrome.
After 6 years, there had been more deaths in the cyclophosphamide group than in the azathioprine group (P = .02) and in the mycophenolate group, although the latter difference was not statistically significant (P = .11). The combined rate of death and chronic renal failure was significantly higher with cyclophosphamide than with either of the oral agents. The cyclophosphamide group also had the highest relapse rate during the maintenance phase.
The differences in side effects were even more dramatic. Amenorrhea affected 32% of patients in the cyclophosphamide group, and only 7% and 6% in the azathioprine and mycophenolate groups, respectively. Rates of infections were 68% in the cyclophosphamide group and 28% and 21% in the azathioprine and mycophenolate groups, respectively. Patients given cyclophosphamide had 13 hospital days per patient per year, while the other groups each had only 1.
This study showed that maintenance therapy with oral azathioprine or mycophenolate was more effective and had fewer adverse effects than standard IV cyclophosphamide therapy. As a result of this study, oral agents for maintenance therapy became the new standard, but the question remained whether oral agents could safely be used for induction.
In a US study, mycophenolate was better than cyclophosphamide for induction
In a noninferiority study, Ginzler et al9 randomized 140 patients with severe lupus nephritis to receive either monthly IV cyclophosphamide or oral mycophenolate as induction therapy for 6 months. Adjunctive care with glucocorticoids was given in both groups. The study population was from 18 US academic centers and was predominantly female, and more than half were African American.
After 24 weeks, 22.5% of the mycophenolate patients were in complete remission by very strict criteria vs only 4% of those given cyclophosphamide (P = .005). The trend for partial remissions was also in favor of mycophenolate, although the difference was not statistically significant. The rate of complete and partial remissions, a prespecified end point, was significantly higher in the mycophenolate group. Although the study was trying to evaluate equivalency, it actually showed superiority for mycophenolate induction therapy.
Serum creatinine levels declined in both groups, but more in the mycophenolate group by 24 weeks. Urinary protein levels fell the same amount in both groups. At 3 years, the groups were statistically equivalent in terms of renal flares, renal failures, and deaths. However, the study groups were small, and the mycophenolate group did have a better trend for both renal failure (N = 4 vs 7) and deaths (N = 4 vs 8).
Mycophenolate also had fewer side effects, including infection, although again the numbers were too small to show statistical significance. The exception was diarrhea (N = 15 in the mycophenolate group vs 2 in the cyclophosphamide group).
A drawback of the study is that its design allowed crossover: a patient for whom therapy was failing after 3 months could switch to the other group, introducing potential confounding. Other limitations were the small population and the question of whether results from patients in the United States are applicable elsewhere in the world.
In a worldwide study, mycophenolate was at least equivalent to cyclophosphamide for induction
The Aspreva Lupus Management Study (ALMS)10 used a similar design with 370 patients worldwide (United States, China, South America, and Europe) in one of the largest trials ever conducted in lupus nephritis. Patients were randomized to 6 months of induction therapy with either IV cyclophosphamide or oral mycophenolate but could not cross over.
At 6 months, response rates were nearly identical in the two groups (50%–55%), with response defined as a combination of specific improvements in proteinuria, serum creatinine, and hematuria. The two groups also appeared identical in terms of individual renal and nonrenal variables.
However, the side effect profiles differed between the two groups. As expected for mycophenolate, diarrhea was the most common side effect (occurring in 28% vs 12% in the cyclophosphamide group). Nausea and vomiting were more common with cyclophosphamide (45% and 37% respectively vs 14% and 13% in the mycophenolate group). Cyclophosphamide also caused hair loss in 35%, vs 10% in the mycophenolate group.
There were 14 deaths overall, a very low number considering the severity of illness, which reflects the better results now achieved with therapy. The mortality rate was higher in the mycophenolate group (5% vs 3%), but the difference was not statistically significant. Six of the nine deaths with mycophenolate occurred at the same center in China, and none occurred in Europe or the United States. In summary, the study did not show that mycophenolate was superior to IV cyclophosphamide for induction therapy, but rather that the two were equivalent in efficacy with different side effect profiles.
Membranous nephropathy: Mycophenolate vs cyclophosphamide
Less evidence is available about treatment for membranous disease, which is characterized by heavy proteinuria and the nephrotic syndrome but usually does not progress to renal failure. Radhakrishnan et al11 combined data from the trial by Ginzler et al9 and the ALMS trial10 and found 84 patients with pure membranous lupus, who were equally divided between the treatment groups receiving IV cyclophosphamide and mycophenolate. Consistent with the larger group’s data, mycophenolate and cyclophosphamide performed similarly in terms of efficacy, but there was a slightly higher rate of side effects with cyclophosphamide.
Maintenance therapy: Mycophenolate superior to azathioprine
The ALMS Maintenance Trial12 evaluated maintenance therapy in the same worldwide population that was studied for induction therapy. Of the 370 patients involved in the induction phase that compared IV cyclophosphamide and oral mycophenolate, 227 responded sufficiently to be rerandomized in a controlled, double-blinded trial of 36 months of maintenance therapy with corticosteroids and either mycophenolate (1 g twice daily) or azathioprine (2 mg/kg per day).
In intention-to-treat analysis, the time to treatment failure (ie, doubling of the serum creatinine level, progressing to renal failure, or death) was significantly shorter in the azathioprine group (P = .003). Every individual end point—end-stage renal disease, renal flares, doubling of serum creatinine, rescue immunosuppression required—was in favor of mycophenolate maintenance. At 3 years, the completion rate was 63% with mycophenolate and 49% with azathioprine. Serious adverse events and withdrawals because of adverse events were more common in the azathioprine group.
In summary, mycophenolate was superior to azathioprine in maintaining renal response and in preventing relapse in patients with active lupus nephritis who responded to induction therapy with either mycophenolate or IV cyclophosphamide. Mycophenolate was found to be superior regardless of initial induction treatment, race, or region and was confirmed by all key secondary end points.
Only one of the 227 patients died during the 3 years—from an auto accident. Again, this indicates the dramatically improved survival today compared with a decade ago.
RITUXIMAB: PROMISING BUT UNPROVEN
Rituximab (Rituxan) was originally approved to treat B-cell non-Hodgkin lymphoma, then rheumatoid arthritis, and most recently ANCA-associated vasculitis. Evidence thus far is mixed regarding its use in lupus nephritis. Although randomized clinical trials have not found it superior to standard regimens, there are many signs that it may be effective.
Rituximab in uncontrolled studies
Terrier et al13 analyzed prospective data from 136 patients with systemic lupus erythematosus, most of whom had renal disease, from the French Autoimmunity and Rituximab registry. Response occurred in 71% of patients using rituximab, with no difference found between patients receiving rituximab monotherapy and those concomitantly receiving immunosuppressive agents.
Melander et al14 retrospectively studied 19 women and 1 man who had been treated with rituximab for severe lupus nephritis and followed for at least 1 year. Three patients had concurrent therapy with cyclophosphamide, and 10 patients continued rituximab as maintenance therapy; 12 patients had lupus nephritis that had been refractory to standard treatment, and 6 had relapsing disease.
At a median follow-up of 22 months, 12 patients (60%) had achieved complete or partial renal remission.
Condon et al15 treated 21 patients who had severe lupus nephritis with two doses of rituximab and IV methylprednisolone 2 weeks apart, then maintenance therapy with mycophenolate without any oral steroids. At a mean follow-up of 35 ± 14 months, 16 patients (76%) were in complete remission, with a mean time to remission of 12 months. Two (9.5%) achieved partial remission. The rate of toxicity was low.
Thus, rituximab appears promising in uncontrolled studies.
Placebo-controlled trials fail to prove rituximab effective
LUNAR trial. On the other hand, the largest placebo-controlled trial to evaluate rituximab in patients with proliferative lupus nephritis, the Lupus Nephritis Assessment With Rituximab (LUNAR) trial,16 found differences in favor of rituximab, but none reached statistical significance. The trial randomized 140 patients to receive either mycophenolate plus periodic rituximab infusions or mycophenolate plus placebo infusions for 1 year. All patients received the same dosage of glucocorticoids, which was tapered over the year.
At the end of 1 year, the groups were not statistically different in terms of complete renal response and partial renal response. Rituximab appeared less likely to produce no response, but the difference was not statistically significant.
African Americans appeared to have a higher response rate to rituximab (70% in the rituximab group achieved a response vs 45% in the control group), but again, the difference did not reach statistical significance, and the total study population of African Americans was only 40.
Rituximab did have a statistically significant positive effect on two serologic markers at 1 year: levels of anti-dsDNA fell faster and complement rose faster. In addition, rates of adverse and serious adverse events were similar between the two groups, with no new or unexpected “safety signals.”
This study can be interpreted in a number of ways. The number of patients may have been too small and the follow-up too short to show significance. On the other hand, adding rituximab to full-dose mycophenolate and glucocorticoids, an already effective regimen, may simply confer no additional benefit.
EXPLORER trial. Similarly, for patients with lupus without nephritis, the Exploratory Phase II/III SLE Evaluation of Rituximab (EXPLORER) trial17 also tested rituximab against a background of an effective therapeutic regimen and found no additional benefit. This study had design problems similar to those of the LUNAR trial.
Rituximab as rescue therapy
The evidence so far indicates that rituximab may have a role as rescue therapy for refractory or relapsing disease. Rituximab must be used with other therapies, but maintenance corticosteroid therapy is not necessary. Its role as a first-line agent in induction therapy for lupus nephritis remains unclear, although it may have an important role for nonwhites. In general, it has been well tolerated. Until a large randomized trial indicates otherwise, it should not be used as a first-line therapy.
The US Food and Drug Administration (FDA) has issued a warning about progressive multifocal leukoencephalopathy as an adverse effect of rituximab and of mycophenolate, but this does not appear to be a major concern for most patients; it is likely to occur only in those who have been over-immunosuppressed for many years.
MULTITARGET THERAPY
The concept of using multiple drugs simultaneously, such as mycophenolate, steroids, and rituximab, is increasingly being tried. Multitarget therapy combines different mechanisms of action for better results, and it can cause fewer side effects because the dosage of each drug can be lower when combined with other immunosuppressives.
Bao et al18 in China randomly assigned 40 patients with diffuse proliferative and membranous nephritis to 6 to 9 months of induction treatment with either multitarget therapy (mycophenolate, tacrolimus [Prograf], and glucocorticoids) or IV cyclophosphamide. More complete remissions occurred in the multitarget therapy group, both at 6 months (50% vs 5%) and at 9 months (65% vs 15%). Most adverse events were less frequent in the multitarget therapy group, although three patients (15%) in the multitarget therapy group developed new-onset hypertension vs none in the cyclophosphamide group.
NEW MEDICATIONS
Entirely new classes of drugs are being developed with immunomodulatory effects, including tolerance molecules, cytokine blockers, inhibitors of human B lymphocyte stimulator, and costimulatory blockers.
Belimumab offers small improvement for lupus
Belimumab (Benlysta) is a human monoclonal antibody that inhibits the biologic activity of human B lymphocyte stimulator; it has recently been approved by the FDA for systemic lupus erythematosus. In a worldwide study,19 867 patients with systemic lupus erythematosus were randomized to receive either belimumab (1 mg/kg or 10 mg/kg) or placebo.
The primary end point was the reduction of disease activity by a scoring system (SELENA-SLEDAI) that incorporated multiple features of lupus, including arthritis, vasculitis, proteinuria, rash, and others. Patients in the belimumab group had better outcomes, but the results were not dramatic. Because the drug is so expensive (about $25,000 per year) and the improvement offered is only incremental, this drug will not likely change the treatment of lupus very much.
Moreover, patients with active lupus nephritis were not included in the study, although a trial in lupus nephritis is being planned. Improvement is harder to demonstrate in lupus nephritis than in rheumatoid arthritis or nonrenal systemic lupus erythematosus: significant changes in serum creatinine and 24-hour urinary protein must be achieved, rather than improvement in more qualitative signs and symptoms such as joint pain, rash, and overall well-being. Although belimumab is still unproven for lupus nephritis, it might be worth trying in patients in whom other therapies have failed.
Laquinimod: A promising experimental drug
Laquinimod is an oral immunomodulatory drug with a number of effects, including down-regulation of major histocompatibility complex class II, chemokines, and adhesion molecules involved in inflammation. It has been studied in more than 2,500 patients with multiple sclerosis. Pilot studies of its use in lupus nephritis are now under way. If it shows promise, a large randomized controlled trial will be conducted.
Abatacept is in clinical trials
Abatacept (Orencia), a costimulation blocker, is undergoing clinical trials in lupus nephritis. Results should be available shortly.
INDIVIDUALIZE THERAPY
This past decade has seen such an increase in options to treat lupus nephritis that therapy can now be individualized.
Choosing IV cyclophosphamide vs mycophenolate
As a result of recent trials, doctors in the United States are increasingly using mycophenolate as the first-line drug for lupus nephritis. In Europe, however, many are choosing the shorter regimen of IV cyclophosphamide because of the results of the Euro-Lupus study.
Nowadays, I tend to use IV cyclophosphamide as the first-line drug only for patients with severe crescentic glomerulonephritis or a very high serum creatinine level. In such cases, there is more experience with cyclophosphamide, and such severe disease does not allow the luxury of trying different therapies sequentially. If such a severely ill patient insists that a future pregnancy is very important, an alternative therapy of mycophenolate plus rituximab should be considered. I prefer mycophenolate for induction and maintenance therapy in most patients.
Dosing and formulation considerations for mycophenolate
Large dosages of mycophenolate are much better tolerated when divided throughout the day. A patient who cannot tolerate 1 g twice daily may be able to tolerate 500 mg four times a day. The formulation can also make a difference: some patients tolerate enteric-coated mycophenolate sodium (Myfortic) better than mycophenolate mofetil (CellCept), and vice versa.
For patients who cannot tolerate mycophenolate, azathioprine is an acceptable alternative. In addition, for a patient who is already doing well on azathioprine, there is no need to change to mycophenolate.
Long maintenance therapy now acceptable
The ALMS Maintenance Trial12 found 3 years of maintenance therapy to be safe and effective. Such a long maintenance period is increasingly viewed as important, especially for patients in their teens and 20s, as it allows them to live a normal life, ie, to finish their education, get married, and become settled socially. Whether 5 years of maintenance therapy or even 10 years is advisable is still unknown.
Treatment during pregnancy
Neither mycophenolate nor azathioprine is approved for use during pregnancy, and safety data are limited. Because there is much more experience with azathioprine during pregnancy in renal transplant recipients, I recommend either switching from mycophenolate to azathioprine or, if the disease has been well controlled, trying to stop medication altogether.
Treatment for lupus nephritis has changed dramatically in recent years. Only 10 years ago, rheumatologists and nephrologists, whether specializing in adult or pediatric medicine, treated lupus nephritis with a similar regimen of monthly intravenous cyclophosphamide (Cytoxan) and glucocorticoids. Although the regimen is effective, side effects such as infection, hair loss, and infertility were extremely common.
Effective but very toxic therapy is common in autoimmune diseases. In the last decade, clinical trials have shown that less toxic drugs are as effective for treating lupus nephritis. This article will review new developments in therapy for lupus nephritis, which can be viewed as a prototype for other fields of medicine.
DEMOGRAPHICS ARE IMPORTANT
Although numerous factors have prognostic value in lupus nephritis (eg, serum creatinine, proteinuria, renal biopsy findings), the most important to consider when designing and interpreting studies are race and socioeconomic variables.
A retrospective study in Miami, FL,1 evaluated 213 patients with lupus nephritis, of whom 47% were Hispanic, 44% African American, and 20% white. At baseline, African Americans had higher blood pressure, higher serum creatinine levels, and lower household income. After 6 years, African Americans fared the worst in terms of doubling of serum creatinine, developing end-stage renal disease, and death; whites had the best outcomes, and Hispanics were in between. Low income was found to be a significant risk factor, independent of racial background.
In a similar retrospective study in New York City in 128 patients (43% white, 40% Hispanic, and 17% African American) with proliferative lupus nephritis,2 disease was much more likely to progress to renal failure over 10 years in patients living in a poor neighborhood, even after adjustment for race.
We need to keep in mind that racial and socioeconomic factors correlate with disease severity when we design and interpret studies of lupus nephritis. Study groups must be carefully balanced with patients of similar racial and socioeconomic profiles. Study findings must be interpreted with caution; for example, whether results from a study from China are applicable to an African American with lupus nephritis in New York City is unclear.
OLDER STANDARD THERAPY: EFFECTIVE BUT TOXIC
The last large National Institutes of Health study that involved only cyclophosphamide and a glucocorticoid was published in 2001,3 with 21 patients receiving cyclophosphamide alone and 20 patients receiving cyclophosphamide plus methylprednisolone. Although lupus nephritis improved, serious side effects occurred in one-third to one-half of patients in each group and included hypertension, hyperlipidemia, valvular heart disease, avascular necrosis, premature menopause, and major infections, including herpes zoster.
Less cyclophosphamide works just as well
The multicenter, prospective Euro-Lupus Nephritis Trial4 randomized 90 patients with proliferative lupus nephritis to receive either standard high-dose intravenous (IV) cyclophosphamide therapy (six monthly pulses and two quarterly pulses, with doses increasing according to the white blood cell count) or low-dose IV cyclophosphamide therapy (six pulses every 2 weeks at a fixed dose of 500 mg). Both regimens were followed by azathioprine (Imuran).
At 4 years, the two treatment groups were not significantly different in terms of treatment failure, remission rates, serum creatinine levels, 24-hour proteinuria, and freedom from renal flares. However, the rates of side effects were significantly different, with more patients in the low-dosage group free of severe infection.
One problem with this study is whether it is applicable to an American lupus nephritis population, since 84% of the patients were white. Since this study, others indicate that this regimen is probably also safe and effective for different racial groups in the United States.
At 10-year follow-up,5 both treatment groups still had identical excellent rates of freedom from end-stage renal disease. Serum creatinine and 24-hour proteinuria were also at excellent levels and identical in both groups. Nearly three quarters of patients still needed glucocorticoid therapy and more than half still needed immunosuppressive therapy, but the rates were not statistically significantly different between the treatment groups.
The cumulative dose of cyclophosphamide was 9.5 g in the standard-treatment group and 5.5 g in the low-dose group. This difference in exposure could make a tremendous difference to patients, not only for immediate side effects such as early menopause and infections, but for the risk of cancer in later decades.
This study showed clearly that low-dose cyclophosphamide is an option for induction therapy. Drawbacks of the study were that the population was mostly white and that patients had only moderately severe disease.
Low-dose cyclophosphamide has largely replaced the older National Institutes of Health regimen, although during the last decade drug therapy has undergone more changes.
MYCOPHENOLATE AND AZATHIOPRINE: ALTERNATIVES TO CYCLOPHOSPHAMIDE
In a Chinese study, mycophenolate was better than cyclophosphamide for induction
In a study in Hong Kong, Chan et al6 randomized 42 patients with severe lupus nephritis to receive either mycophenolate mofetil (available in the United States as CellCept; 2 g/day for 6 months, then 1 g/day for 6 months) or oral cyclophosphamide (2.5 mg/kg per day for 6 months) followed by azathioprine (1.5–2.0 mg/kg per day) for 6 months. Both groups also received prednisolone during the year.
At the end of the first year, the two groups were not significantly different in their rates of complete remission, partial remission, and relapse. The rate of infection, although not significantly different, was higher in the cyclophosphamide group (33% vs 19%). Two patients (10%) died in the cyclophosphamide group, but the difference in mortality rates was not statistically significant.
Nearly 5 years later,7 rates of chronic renal failure and relapse were still statistically the same in the two groups. Infections were fewer in the mycophenolate group (13% vs 40%, P = .013). The rate of amenorrhea was 36% in the cyclophosphamide group and only 4% in the mycophenolate group (P = .004). Four patients in the cyclophosphamide group and none in the mycophenolate group reached the composite end point of end-stage renal failure or death (P = .062).
This study appeared to offer a new option with equal efficacy and fewer side effects than standard therapy. However, its applicability to non-Chinese populations remained to be shown.
In a US study, mycophenolate or azathioprine was better than cyclophosphamide as maintenance
In a study in Miami,8 59 patients with lupus nephritis were given standard induction therapy with IV cyclophosphamide plus glucocorticoids for 6 months, then randomly assigned to one of three maintenance therapies for 1 to 3 years: IV injections of cyclophosphamide every 3 months (standard therapy), oral azathioprine, or oral mycophenolate. The population was 93% female, their average age was 33 years, and nearly half were African American, with many of the others being Hispanic. Patients tended to have severe disease, with nearly two-thirds having nephrotic syndrome.
After 6 years, there had been more deaths in the cyclophosphamide group than in the azathioprine group (P = .02) and in the mycophenolate group, although the latter difference was not statistically significant (P = .11). The combined rate of death and chronic renal failure was significantly higher with cyclophosphamide than with either of the oral agents. The cyclophosphamide group also had the highest relapse rate during the maintenance phase.
The differences in side effects were even more dramatic. Amenorrhea affected 32% of patients in the cyclophosphamide group, and only 7% and 6% in the azathioprine and mycophenolate groups, respectively. Rates of infections were 68% in the cyclophosphamide group and 28% and 21% in the azathioprine and mycophenolate groups, respectively. Patients given cyclophosphamide had 13 hospital days per patient per year, while the other groups each had only 1.
This study showed that maintenance therapy with oral azathioprine or mycophenolate was more effective and had fewer adverse effects than standard IV cyclophosphamide therapy. As a result of this study, oral agents for maintenance therapy became the new standard, but the question remained whether oral agents could safely be used for induction.
In a US study, mycophenolate was better than cyclophosphamide for induction
In a noninferiority study, Ginzler et al9 randomized 140 patients with severe lupus nephritis to receive either monthly IV cyclophosphamide or oral mycophenolate as induction therapy for 6 months. Adjunctive care with glucocorticoids was given in both groups. The study population was from 18 US academic centers and was predominantly female, and more than half were African American.
After 24 weeks, 22.5% of the mycophenolate patients were in complete remission by very strict criteria vs only 4% of those given cyclophosphamide (P = .005). The trend for partial remissions was also in favor of mycophenolate, although the difference was not statistically significant. The rate of complete and partial remissions, a prespecified end point, was significantly higher in the mycophenolate group. Although the study was trying to evaluate equivalency, it actually showed superiority for mycophenolate induction therapy.
Serum creatinine levels declined in both groups, but more in the mycophenolate group by 24 weeks. Urinary protein levels fell the same amount in both groups. At 3 years, the groups were statistically equivalent in terms of renal flares, renal failures, and deaths. However, the study groups were small, and the mycophenolate group did have a better trend for both renal failure (N = 4 vs 7) and deaths (N = 4 vs 8).
Mycophenolate also had fewer side effects, including infection, although again the numbers were too small to show statistical significance. The exception was diarrhea (N = 15 in the mycophenolate group vs 2 in the cyclophosphamide group).
A drawback of the study is that it was designed as a crossover study: a patient for whom therapy was failing after 3 months could switch to the other group, introducing potential confounding. Other problems involved the small population size and the question of whether results from patients in the United States were applicable to others worldwide.
In a worldwide study, mycophenolate was at least equivalent to cyclophosphamide for induction
The Aspreva Lupus Management Study (ALMS)10 used a similar design with 370 patients worldwide (United States, China, South America, and Europe) in one of the largest trials ever conducted in lupus nephritis. Patients were randomized to 6 months of induction therapy with either IV cyclophosphamide or oral mycophenolate but could not cross over.
At 6 months, response rates were essentially identical in the two groups (50%–55%), with response defined as a combination of specific improvements in proteinuria, serum creatinine, and hematuria. The groups also appeared identical in terms of individual renal and nonrenal variables.
However, the side effect profiles differed between the two groups. As expected with mycophenolate, diarrhea was the most common side effect (occurring in 28% vs 12% in the cyclophosphamide group). Nausea and vomiting were more common with cyclophosphamide (45% and 37%, respectively, vs 14% and 13% in the mycophenolate group). Cyclophosphamide also caused hair loss in 35% of patients, vs 10% in the mycophenolate group.
There were 14 deaths overall, which is a very low number considering the patients’ severity of illness, and it indicates the better results now achieved with therapy. The mortality rate was higher in the mycophenolate group (5% vs 3%), but the difference was not statistically significant. Six of the nine deaths with mycophenolate were from the same center in China, and none were from Europe or the United States. In summary, the study did not show that mycophenolate was superior to IV cyclophosphamide for induction therapy, but that they were equivalent in efficacy with different side effect profiles.
Membranous nephropathy: Mycophenolate vs cyclophosphamide
Less evidence is available about treatment for membranous disease, which is characterized by heavy proteinuria and the nephrotic syndrome but usually does not progress to renal failure. Radhakrishnan et al11 combined data from the trial by Ginzler et al9 and the ALMS trial10 and found 84 patients with pure membranous lupus, who were equally divided between the treatment groups receiving IV cyclophosphamide and mycophenolate. Consistent with the larger group’s data, mycophenolate and cyclophosphamide performed similarly in terms of efficacy, but there was a slightly higher rate of side effects with cyclophosphamide.
Maintenance therapy: Mycophenolate superior to azathioprine
The ALMS Maintenance Trial12 evaluated maintenance therapy in the same worldwide population that was studied for induction therapy. Of the 370 patients involved in the induction phase that compared IV cyclophosphamide and oral mycophenolate, 227 responded sufficiently to be rerandomized in a controlled, double-blinded trial of 36 months of maintenance therapy with corticosteroids and either mycophenolate (1 g twice daily) or azathioprine (2 mg/kg per day).
In intention-to-treat analysis, the time to treatment failure (ie, doubling of the serum creatinine level, progressing to renal failure, or death) was significantly shorter in the azathioprine group (P = .003). Every individual end point—end-stage renal disease, renal flares, doubling of serum creatinine, rescue immunosuppression required—was in favor of mycophenolate maintenance. At 3 years, the completion rate was 63% with mycophenolate and 49% with azathioprine. Serious adverse events and withdrawals because of adverse events were more common in the azathioprine group.
In summary, mycophenolate was superior to azathioprine in maintaining renal response and in preventing relapse in patients with active lupus nephritis who responded to induction therapy with either mycophenolate or IV cyclophosphamide. Mycophenolate was found to be superior regardless of initial induction treatment, race, or region and was confirmed by all key secondary end points.
Only one of the 227 patients died during the 3 years—from an auto accident. Again, this indicates the dramatically improved survival today compared with a decade ago.
RITUXIMAB: PROMISING BUT UNPROVEN
Rituximab (Rituxan) was originally approved to treat B-cell lymphoma, then rheumatoid arthritis, and most recently vasculitis. Evidence thus far is mixed regarding its use as a treatment for lupus nephritis. Although randomized clinical trials have not found it to be superior to standard regimens, there are many signs that it may be effective.
Rituximab in uncontrolled studies
Terrier et al13 analyzed prospective data from 136 patients with systemic lupus erythematosus, most of whom had renal disease, from the French Autoimmunity and Rituximab registry. Response occurred in 71% of patients using rituximab, with no difference found between patients receiving rituximab monotherapy and those concomitantly receiving immunosuppressive agents.
Melander et al14 retrospectively studied 19 women and 1 man who had been treated with rituximab for severe lupus nephritis and followed for at least 1 year. Three patients had concurrent therapy with cyclophosphamide, and 10 patients continued rituximab as maintenance therapy; 12 patients had lupus nephritis that had been refractory to standard treatment, and 6 had relapsing disease.
At a median follow-up of 22 months, 12 patients (60%) had achieved complete or partial renal remission.
Condon et al15 treated 21 patients who had severe lupus nephritis with two doses of rituximab and IV methylprednisolone 2 weeks apart, then maintenance therapy with mycophenolate without any oral steroids. At a mean follow-up of 35 ± 14 months, 16 patients (76%) were in complete remission, with a mean time to remission of 12 months. Two (9.5%) achieved partial remission. The rate of toxicity was low.
Thus, rituximab appears promising in uncontrolled studies.
Placebo-controlled trials fail to prove rituximab effective
LUNAR trial. On the other hand, the largest placebo-controlled trial to evaluate rituximab in patients with proliferative lupus nephritis, the Lupus Nephritis Assessment With Rituximab (LUNAR) trial,16 found differences in favor of rituximab, but none reached statistical significance. The trial randomized 140 patients to receive either mycophenolate plus periodic rituximab infusions or mycophenolate plus placebo infusions for 1 year. All patients received the same dosage of glucocorticoids, which was tapered over the year.
At the end of 1 year, the groups were not statistically different in terms of complete renal response and partial renal response. Rituximab appeared less likely to produce no response, but the difference was not statistically significant.
African Americans appeared to have a higher response rate to rituximab (70% in the rituximab group achieved a response vs 45% in the control group), but again, the difference did not reach statistical significance, and the total study population of African Americans was only 40.
Rituximab did have a statistically significant positive effect on two serologic markers at 1 year: levels of anti-dsDNA fell faster and complement rose faster. In addition, rates of adverse and serious adverse events were similar between the two groups, with no new or unexpected “safety signals.”
This study can be interpreted in a number of ways. The number of patients may have been too small to show significance, and the follow-up may have been too short. On the other hand, adding rituximab to full-dose mycophenolate and steroids, an already effective regimen, may simply provide no additional benefit.
EXPLORER trial. Similarly, for patients with lupus without nephritis, the Exploratory Phase II/III SLE Evaluation of Rituximab (EXPLORER) trial17 also tested rituximab against a background of an effective therapeutic regimen and found no additional benefit. This study had design problems similar to those of the LUNAR trial.
Rituximab as rescue therapy
The evidence so far indicates that rituximab may have a role as rescue therapy for refractory or relapsing disease. Rituximab must be used with other therapies, but maintenance corticosteroid therapy is not necessary. Its role as a first-line agent in induction therapy for lupus nephritis remains unclear, although it may have an important role for nonwhites. In general, it has been well tolerated. Until a large randomized trial indicates otherwise, it should not be used as a first-line therapy.
The US Food and Drug Administration (FDA) sent out a warning about the danger of progressive multifocal leukoencephalopathy as an adverse effect of rituximab and of mycophenolate, but this does not appear to be a major concern for most patients and is only likely to occur in those who have been over-immunosuppressed for many years.
MULTITARGET THERAPY
The concept of using multiple drugs simultaneously—such as mycophenolate, steroids, and rituximab—is increasingly being tried. Multitarget therapy appears to offer the advantage of combining different modes of action for better results, with fewer side effects, because the dosage of each individual drug can be lower when it is combined with other immunosuppressives.
Bao et al18 in China randomly assigned 40 patients with diffuse proliferative and membranous nephritis to 6 to 9 months of induction treatment with either multitarget therapy (mycophenolate, tacrolimus [Prograf], and glucocorticoids) or IV cyclophosphamide. More complete remissions occurred in the multitarget therapy group, both at 6 months (50% vs 5%) and at 9 months (65% vs 15%). Most adverse events were less frequent in the multitarget therapy group, although three patients (15%) in the multitarget therapy group developed new-onset hypertension vs none in the cyclophosphamide group.
NEW MEDICATIONS
Entirely new classes of drugs are being developed with immunomodulatory effects, including tolerance molecules, cytokine blockers, inhibitors of human B lymphocyte stimulator, and costimulatory blockers.
Belimumab offers small improvement for lupus
Belimumab (Benlysta) is a human monoclonal antibody that inhibits the biologic activity of human B lymphocyte stimulator; it has recently been approved by the FDA for systemic lupus erythematosus. In a worldwide study,19 867 patients with systemic lupus erythematosus were randomized to receive either belimumab (1 mg/kg or 10 mg/kg) or placebo.
The primary end point was the reduction of disease activity by a scoring system (SELENA-SLEDAI) that incorporated multiple features of lupus, including arthritis, vasculitis, proteinuria, rash, and others. Patients in the belimumab group had better outcomes, but the results were not dramatic. Because the drug is so expensive (about $25,000 per year) and the improvement offered is only incremental, this drug will not likely change the treatment of lupus very much.
Moreover, patients with lupus nephritis were not included in the study, but a new study is being planned to include them. Improvement is harder to demonstrate in lupus nephritis than in rheumatoid arthritis or nonrenal systemic lupus erythematosus: significant changes in serum creatinine and 24-hour urinary protein must be achieved, rather than improvement in more qualitative end points such as joint pain, rash, and overall well-being. Although belimumab is still unproven for lupus nephritis, it might be worth trying in patients for whom other therapy has failed.
Laquinimod: A promising experimental drug
Laquinimod is an oral immunomodulatory drug with a number of effects, including down-regulation of major histocompatibility complex II, chemokines, and adhesion-related molecules involved in inflammation. It has been studied in more than 2,500 patients with multiple sclerosis. Pilot studies of its use in lupus nephritis are now being done. If it shows promise, a large randomized, controlled trial will be conducted.
Abatacept is in clinical trials
Abatacept (Orencia), a costimulation blocker, is undergoing clinical trials in lupus nephritis. Results should be available shortly.
INDIVIDUALIZE THERAPY
This past decade has seen such an increase in options to treat lupus nephritis that therapy can now be individualized.
Choosing IV cyclophosphamide vs mycophenolate
As a result of recent trials, doctors in the United States are increasingly using mycophenolate as the first-line drug for lupus nephritis. In Europe, however, many are choosing the shorter regimen of IV cyclophosphamide because of the results of the Euro-Lupus study.
Nowadays, I tend to use IV cyclophosphamide as the first-line drug only for patients with severe crescentic glomerulonephritis or a very high serum creatinine level. In such cases, there is more experience with cyclophosphamide, and such severe disease does not allow the luxury of trying different therapies sequentially. If such a severely ill patient insists that a future pregnancy is very important, alternative therapy with mycophenolate plus rituximab should be considered. I prefer mycophenolate for induction and maintenance therapy in most patients.
Dosing and formulation considerations for mycophenolate
Large dosages of mycophenolate are much better tolerated when divided throughout the day. A patient who cannot tolerate 1 g twice daily may be able to tolerate 500 mg four times a day. The formulation can also make a difference: some patients tolerate enteric-coated mycophenolate sodium (Myfortic) better than mycophenolate mofetil (CellCept), and vice versa.
For patients who cannot tolerate mycophenolate, azathioprine is an acceptable alternative. In addition, for a patient who is already doing well on azathioprine, there is no need to change to mycophenolate.
Long maintenance therapy now acceptable
The ALMS Maintenance Trial12 found 3 years of maintenance therapy to be safe and effective. Such a long maintenance period is increasingly viewed as important, especially for patients in their teens and 20s, as it allows them to live a normal life, ie, to finish their education, get married, and become settled socially. Whether 5 years of maintenance therapy or even 10 years is advisable is still unknown.
Treatment during pregnancy
Neither mycophenolate nor azathioprine is recommended during pregnancy, although data on their effects in pregnancy are limited. Because there is much more renal transplant experience with azathioprine during pregnancy, I recommend either switching from mycophenolate to azathioprine or trying to stop medication altogether if the patient has been well controlled.
- Contreras G, Lenz O, Pardo V, et al. Outcomes in African Americans and Hispanics with lupus nephritis. Kidney Int 2006; 69:1846–1851.
- Barr RG, Seliger S, Appel GB, et al. Prognosis in proliferative lupus nephritis: the role of socio-economic status and race/ethnicity. Nephrol Dial Transplant 2003; 18:2039–2046.
- Illei GG, Austin HA, Crane M, et al. Combination therapy with pulse cyclophosphamide plus pulse methylprednisolone improves long-term renal outcome without adding toxicity in patients with lupus nephritis. Ann Intern Med 2001; 135:248–257.
- Houssiau FA, Vasconcelos C, D’Cruz D, et al. Immunosuppressive therapy in lupus nephritis: the Euro-Lupus Nephritis Trial, a randomized trial of low-dose versus high-dose intravenous cyclophosphamide. Arthritis Rheum 2002; 46:2121–2131.
- Houssiau FA, Vasconcelos C, D’Cruz D, et al. The 10-year follow-up data of the Euro-Lupus Nephritis Trial comparing low-dose and high-dose intravenous cyclophosphamide. Ann Rheum Dis 2010; 69:61–64.
- Chan TM, Li FK, Tang CS, et al. Efficacy of mycophenolate mofetil in patients with diffuse proliferative lupus nephritis. Hong Kong-Guangzhou Nephrology Study Group. N Engl J Med 2000; 343:1156–1162.
- Chan TM, Tse KC, Tang CS, Mok MY, Li FK; Hong Kong Nephrology Study Group. Long-term study of mycophenolate mofetil as continuous induction and maintenance treatment for diffuse proliferative lupus nephritis. J Am Soc Nephrol 2005; 16:1076–1084.
- Contreras G, Pardo V, Leclercq B, et al. Sequential therapies for proliferative lupus nephritis. N Engl J Med 2004; 350:971–980.
- Ginzler EM, Dooley MA, Aranow C, et al. Mycophenolate mofetil or intravenous cyclophosphamide for lupus nephritis. N Engl J Med 2005; 353:2219–2228.
- Appel GB, Contreras G, Dooley MA, et al. Mycophenolate mofetil versus cyclophosphamide for induction treatment of lupus nephritis. J Am Soc Nephrol 2009; 20:1103–1112.
- Radhakrishnan J, Moutzouris DA, Ginzler EM, Solomons N, Siempos II, Appel GB. Mycophenolate mofetil and intravenous cyclophosphamide are similar as induction therapy for class V lupus nephritis. Kidney Int 2010; 77:152–160.
- Dooley MA, Jayne D, Ginzler EM, et al; for the ALMS Group. Mycophenolate versus azathioprine as maintenance therapy for lupus nephritis. N Engl J Med 2011; 365:1886–1895.
- Terrier B, Amoura Z, Ravaud P, et al; Club Rhumatismes et Inflammation. Safety and efficacy of rituximab in systemic lupus erythematosus: results from 136 patients from the French AutoImmunity and Rituximab registry. Arthritis Rheum 2010; 62:2458–2466.
- Melander C, Sallée M, Troillet P, et al. Rituximab in severe lupus nephritis: early B-cell depletion affects long-term renal outcome. Clin J Am Soc Nephrol 2009; 4:579–587.
- Condon MB, Griffith M, Cook HT, Levy J, Lightstone L, Cairns T. Treatment of class IV lupus nephritis with rituximab & mycophenolate mofetil (MMF) with no oral steroids is effective and safe (abstract). J Am Soc Nephrol 2010; 21(suppl):625A–626A.
- Furie RA, Looney RJ, Rovin E, et al. Efficacy and safety of rituximab in subjects with active proliferative lupus nephritis (LN): results from the randomized, double-blind phase III LUNAR study (abstract). Arthritis Rheum 2009; 60(suppl 1):S429.
- Merrill JT, Neuwelt CM, Wallace DJ, et al. Efficacy and safety of rituximab in moderately-to-severely active systemic lupus erythematosus: the randomized, double-blind, phase II/III systemic lupus erythematosus evaluation of rituximab trial. Arthritis Rheum 2010; 62:222–233.
- Bao H, Liu ZH, Xie HL, Hu WX, Zhang HT, Li LS. Successful treatment of class V+IV lupus nephritis with multitarget therapy. J Am Soc Nephrol 2008; 19:2001–2010.
- Navarra SV, Guzmán RM, Gallacher AE, et al; BLISS-52 Study Group. Efficacy and safety of belimumab in patients with active systemic lupus erythematosus: a randomised, placebo-controlled, phase 3 trial. Lancet 2011; 377:721–731.
KEY POINTS
- Mycophenolate is at least equivalent to intravenous cyclophosphamide for induction and maintenance treatment of severe lupus nephritis.
- The role of rituximab is unclear, and for now it should only be used in relapsing patients or patients whose disease is resistant to standard therapy.
- Using combination therapies for induction treatment and maintenance is becoming increasingly common.
- Three-year maintenance therapy is now considered advisable in most patients.
- Entirely new drugs under study include costimulatory blockers, inhibitors of human B lymphocyte stimulator, tolerance molecules, and cytokine blockers.
Finding the cause of acute kidney injury: Which index of fractional excretion is better?
An acute kidney injury can result from a myriad of causes and pathogenic pathways. Of these, the two main categories are prerenal causes (eg, heart failure, volume depletion) and causes that are intrinsic to the kidney (eg, acute tubular necrosis). Together, these categories account for more than 70% of all cases.1–3
While early intervention improves outcomes in both of these categories, the physician in the acute care setting must quickly distinguish between them, as their treatments differ. Similar clinical presentations along with confounding laboratory values make this distinction difficult. Furthermore, prolonged prerenal azotemia can eventually lead to acute tubular necrosis.
Therefore, several methods for distinguishing prerenal from intrinsic causes of acute kidney injury have been developed, including urinalysis, response to fluid challenge, the blood urea nitrogen-to-plasma creatinine ratio, levels of various urine electrolytes and biomarkers, and, the topics of our discussion here, the fractional excretion of sodium (FENa) and the fractional excretion of urea (FEU).4 While each method offers a unique picture of renal function, the validity of each may be affected by specific clinical factors.
In light of the frequent use of diuretics in inpatients and outpatients, a review of the utility of the FEU test is warranted. We will therefore present the theory behind the use of the FENa and the FEU for distinguishing intrinsic from prerenal causes of acute kidney injury, the relevant literature comparing the utility of these investigations, and our suggestions for clinical practice.
ACUTE KIDNEY INJURY DEFINED
Acute kidney injury (formerly called acute renal failure) describes an abrupt decline in renal function. Consensus definitions have been published and are gaining widespread acceptance and use.9,10 The current definition is10:
- An absolute increase in serum creatinine ≥ 0.3 mg/dL (26.4 μmol/L) in 48 hours, or
- A percentage increase in serum creatinine ≥ 50% in 48 hours, or
- Urine output < 0.5 mL/kg/hour for > 6 hours.
These clear criteria allow for earlier recognition and treatment of this condition.
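As a rough illustration, the three criteria above can be encoded as a simple check. This is a hypothetical helper for illustration only, not a clinical tool; the function name and argument layout are assumptions, not part of the consensus definition.

```python
def meets_aki_definition(cr_baseline, cr_current, hours,
                         urine_ml_per_kg_hr=None, oliguria_hours=0):
    """Return True if any one consensus criterion quoted in the text is met:
    - absolute serum creatinine rise >= 0.3 mg/dL within 48 hours, or
    - relative serum creatinine rise >= 50% within 48 hours, or
    - urine output < 0.5 mL/kg/hour for more than 6 hours.
    """
    delta = cr_current - cr_baseline
    if hours <= 48 and delta >= 0.3:
        return True
    if hours <= 48 and cr_baseline > 0 and delta / cr_baseline >= 0.5:
        return True
    if urine_ml_per_kg_hr is not None:
        if urine_ml_per_kg_hr < 0.5 and oliguria_hours > 6:
            return True
    return False

# Creatinine rose from 1.0 to 1.4 mg/dL over 36 hours:
# meets the absolute-rise criterion.
print(meets_aki_definition(1.0, 1.4, hours=36))            # True

# A smaller creatinine rise, but oliguria (0.3 mL/kg/hour for 8 hours)
# satisfies the urine-output criterion.
print(meets_aki_definition(1.0, 1.2, hours=36,
                           urine_ml_per_kg_hr=0.3,
                           oliguria_hours=8))              # True
```

Note that the criteria are disjunctive: satisfying any single one is sufficient, which is what allows earlier recognition.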
Acute kidney injury is fairly common in hospitalized patients, with 172 to 620 cases per million patients per year.11–14 Furthermore, hospitalized patients with acute kidney injury continue to have high rates of morbidity and death, especially those with more severe cases, in which the mortality rate remains as high as 40%.15
FRACTIONAL EXCRETION OF SODIUM
The FENa is a measure of the extraction of sodium and water from the glomerular filtrate. It is the ratio of the clearance of sodium (the urinary sodium concentration times the urinary flow rate, divided by the plasma sodium concentration) to the overall glomerular filtration rate, estimated by the renal clearance of creatinine. Because the urinary flow rate cancels out, it can be calculated as the ratio of plasma creatinine to urine creatinine divided by the ratio of plasma sodium to urine sodium:

FENa (%) = [(PCr / UCr) ÷ (PNa / UNa)] × 100 = [(UNa × PCr) / (PNa × UCr)] × 100

where U and P denote urine and plasma concentrations, Na sodium, and Cr creatinine.
A euvolemic person with normal renal function and moderate salt intake in a steady state will have an FENa of approximately 1%.16
In 1976, Espinel17 originally showed that the FENa could be used during the oliguric phase in patients in acute renal failure to differentiate between prerenal acute kidney injury and acute tubular necrosis. Given the kidney’s ability to reabsorb more sodium during times of volume depletion, Espinel suggested that an FENa of less than 1% reflected normal sodium retention, indicating a prerenal cause, ie, diminished effective circulating volume. A value greater than 3% likely represented tubular damage, indicating that the nephrons were unable to properly reabsorb sodium.
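The calculation and Espinel's cutoffs described above can be sketched in a few lines. This is a minimal illustration; the function names and the sample laboratory values are assumptions, and the 1% and 3% cutoffs are taken from the text.

```python
def fena_percent(u_na, p_na, u_cr, p_cr):
    """Fractional excretion of sodium, as a percentage.

    FENa (%) = (U_Na x P_Cr) / (P_Na x U_Cr) x 100
    Units cancel as long as both sodium values share one unit and
    both creatinine values share another.
    """
    return (u_na * p_cr) / (p_na * u_cr) * 100.0

def interpret_fena(fena):
    # Espinel's cutoffs: < 1% suggests a prerenal cause (normal sodium
    # retention); > 3% suggests tubular damage; in between is indeterminate.
    if fena < 1.0:
        return "prerenal"
    if fena > 3.0:
        return "intrinsic (acute tubular necrosis)"
    return "indeterminate"

# Example values (assumed): urine Na 20 mEq/L, plasma Na 140 mEq/L,
# urine creatinine 100 mg/dL, plasma creatinine 2 mg/dL.
fena = fena_percent(u_na=20, p_na=140, u_cr=100, p_cr=2)
print(round(fena, 2), interpret_fena(fena))  # 0.29 prerenal
```

The example illustrates the classic picture of volume depletion: avid sodium retention drives the FENa well below 1% despite an elevated serum creatinine.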
The clinical utility of this index was apparent, as the management of prerenal azotemia and acute tubular necrosis differ.18 While both require fluid repletion, the risk of volume overload in acute tubular necrosis is high. Furthermore, acute tubular necrosis secondary to nephrotoxins could require hemodialysis to facilitate clearance of the offending agent.
The FENa test was subsequently validated in a number of studies in different populations and is still widely used.19–21
Limitations to the use of the FENa have been noted in various clinical settings. Notably, it can be falsely depressed in a number of intrinsic renal conditions, such as contrast-induced nephropathy, rhabdomyolysis, and acute glomerulonephritis. Conversely, patients with prerenal acute kidney injury who take diuretics can have a falsely elevated value because of pharmacologically induced renal excretion of sodium independent of volume status. This is commonly seen in patients on diuretic therapy who have a low baseline effective circulating volume, such as those with congestive heart failure or hepatic cirrhosis.
FRACTIONAL EXCRETION OF UREA
Urea is continuously produced in the liver as the end product of protein metabolism. It is a small, water-soluble molecule that freely passes across cell membranes and is therefore continuously filtered and excreted by the kidneys. Not merely a waste product, urea is also important in water balance and constitutes approximately half of the normal solute content of urine.22
Urea’s excretion mechanisms are well characterized.22,23 It is absorbed in the proximal tubule, the medullary loop of Henle, and the medullary collecting ducts via facilitated diffusion through specific urea transporters.24 After being absorbed in the loop of Henle, urea is resecreted, a process that creates an osmotic gradient along the medulla that ultimately regulates urea excretion and reabsorption in the medullary collecting duct. Low-volume states are associated with decreased urea excretion due to a physiologic increase in antidiuretic hormone secretion, and the reverse is true for high-volume states.
The FEU has been recognized as a clinically useful tool. The correlation between serum and urine urea concentrations was investigated as early as 1904.25 However, most studies during the ensuing century focused on the serum urea concentration or the creatinine-to-urea ratio as a measure of glomerular failure.26–28 In 1992, Kaplan and Kohn29 proposed that the FEU could be a useful measure for assessing renal dysfunction in acute kidney injury. Conceptually similar to the FENa, the FEU is calculated as:

FEU (%) = [(Uurea × PCr) / (Purea × UCr)] × 100

where Uurea and Purea are the urine and plasma urea concentrations and UCr and PCr are the urine and plasma creatinine concentrations.
An FEU less than 35% suggests a prerenal cause of acute kidney injury, while a value greater than 50% suggests an intrinsic one.
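Paralleling the FENa sketch above, the FEU calculation and the 35%/50% cutoffs from the text can be expressed the same way. Again, the function names and sample values are assumptions for illustration.

```python
def feu_percent(u_urea, p_urea, u_cr, p_cr):
    """Fractional excretion of urea, as a percentage.

    FEU (%) = (U_urea x P_Cr) / (P_urea x U_Cr) x 100
    """
    return (u_urea * p_cr) / (p_urea * u_cr) * 100.0

def interpret_feu(feu):
    # Cutoffs from the text: < 35% suggests a prerenal cause of acute
    # kidney injury; > 50% suggests an intrinsic one.
    if feu < 35.0:
        return "prerenal"
    if feu > 50.0:
        return "intrinsic"
    return "indeterminate"

# Example values (assumed): urine urea 300 mg/dL, plasma urea (BUN)
# 60 mg/dL, urine creatinine 80 mg/dL, plasma creatinine 2.5 mg/dL.
feu = feu_percent(u_urea=300, p_urea=60, u_cr=80, p_cr=2.5)
print(round(feu, 1), interpret_feu(feu))  # 15.6 prerenal
```

Because urea handling is driven by antidiuretic hormone rather than by diuretic-induced natriuresis, the FEU is the index usually proposed when diuretic exposure makes the FENa unreliable.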
FRACTIONAL EXCRETION OF UREA VS FRACTIONAL EXCRETION OF SODIUM
Kaplan and Kohn (1992)
Kaplan and Kohn,29 in their 1992 study, retrospectively analyzed 87 urine samples from 40 patients with renal dysfunction (not specifically acute kidney injury) thought to be secondary to volume depletion in which the FENa was discordant with the FEU.
Findings. Thirty-nine of the 40 patients treated with diuretics had a high FENa value. However, the FEU was low in all of these patients, leading the authors to conclude that the latter may be the more useful of the two indices in evaluating patients receiving diuretics who present with symptoms that suggest prerenal azotemia.
Limitations of the study. On closer inspection, these findings were not generalizable, for several reasons. First, the time that elapsed between administration of diuretics and evaluation of urinary electrolytes varied widely. Additionally, the study was a retrospective analysis of isolated urine specimens without clear correlation to a clinical patient or context. For these reasons, prospective analyses of the utility of the FEU were needed.
Carvounis et al (2002)
Carvounis et al30 prospectively evaluated the FENa and the FEU in 102 consecutive intensive care patients with acute kidney injury (defined as a serum creatinine concentration > 1.5 mg/dL or an increase of more than 0.5 mg/dL in less than 48 hours). Oliguria was not an inclusion criterion for the study, but patients with acute glomerulonephritis and obstructive nephropathy were excluded. The study grouped subjects into those with prerenal azotemia, prerenal azotemia plus diuretic use, or acute tubular necrosis on the basis of the clinical diagnosis of the attending nephrologist.
Findings. The FEU was more sensitive than the FENa in detecting prerenal azotemia, especially in patients with prerenal azotemia who were receiving diuretics. Overall, the FEU had higher sensitivity and specificity for prerenal azotemia regardless of diuretic use and, more important, the best overall positive and negative predictive values for detecting it (99% and 75%, respectively).
These results indicate that, in patients given diuretics, the FENa fails to discriminate between prerenal azotemia and acute tubular necrosis. Conversely, the FEU was excellent in discriminating between all cases of prerenal azotemia and acute tubular necrosis irrespective of the use of diuretics. This has significant practical application, given the frequency of diuretic use in the hospital, particularly in intensive care patients.
Limitations of the study. While the findings supported the utility of the FEU, the study population was limited to intensive care patients. Furthermore, the authors did not report the statistical significance of their findings.30
Pépin et al (2007)
Pépin et al8 performed a similar study, investigating the diagnostic utility of the FENa and the FEU in patients with acute kidney injury, with or without diuretic therapy.
The authors prospectively studied 99 consecutive patients confirmed by an independent nephrologist to have acute kidney injury (defined as an increase in serum creatinine of more than 30% over baseline values within less than 1 week) due to either volume depletion or ischemia. They excluded patients with less common causes of acute kidney injury, such as rhabdomyolysis, obstructive nephropathy, adrenal insufficiency, acute glomerulonephritis, and nephrotoxic acute kidney injury, as well as patients with chronic kidney disease.
Patients were grouped into those with transient acute kidney injury (from decreased kidney perfusion) and persistent acute kidney injury (attributed to acute tubular necrosis), with or without diuretic therapy, according to predefined clinical criteria. They were considered to have diuretic exposure if they had received furosemide (Lasix) within 24 hours or a thiazide within 48 hours of sampling.
Findings. The FENa proved superior to the FEU in patients not taking diuretics and, contrary to the findings of Carvounis et al,30 exhibited diagnostic utility in patients taking diuretics as well. Neither index discriminated between the different etiologies exceptionally well, however.
Of note, the study population was more inclusive than in previous studies, with only 63 intensive care patients, thus making the results more generalizable to all cases of inpatient acute kidney injury. Furthermore, the study included patients with and without oliguria, and the sensitivity and specificity of both the FENa and the FEU were higher in the nonoliguric group (n = 25).
Limitations of the study. The authors admit that a long time may have elapsed between diuretic administration and urine measurements, thereby mitigating the diuretic’s natriuretic effect independent of the patient’s volume status. While this variable may account for the better performance of the FENa than in the other studies, it does not account for the poor performance of the FEU.
Additionally, few of the findings reached statistical significance.
Lastly, a high percentage (30%) of patients had sepsis. The FEU is less effective in patients with infection, as cytokines interfere with the urea transporters in the kidney and colon.31
Lim et al (2009)
Lim et al32 conducted a study similar in design to that of Pépin et al.8
Findings. The FEU was as clinically useful as the FENa at distinguishing transient from persistent acute kidney injury in patients on diuretics. Using a cutoff FEU of less than 30% and a cutoff FENa of less than 1.5% for transient acute kidney injury (based on calculated receiver operating characteristic curves), FENa was more sensitive and specific than FEU in the nondiuretic groups. In patients exposed to diuretics, FEU was more sensitive but less specific than FENa.
FRACTIONAL EXCRETION OF UREA IN OLIGURIA
Diskin et al (2010)
In 2010, Diskin et al33 published a prospective, observational study of 100 consecutive patients with oliguric azotemia referred to a nephrology service. They defined acute kidney injury as serum creatinine concentration greater than 1.9 mg/dL and urine output less than 100 mL in 24 hours. They used a higher FEU cutoff for prerenal azotemia of less than 40% to reflect the known urea secretion rate in oliguric patients (600 mL/24 hours). They used an FENa of less than 1% and greater than 3% to distinguish prerenal azotemia from acute tubular necrosis.
Findings. The FEU was more accurate than the FENa, giving the right diagnosis in 95% vs 54% of cases (P < .0001). The difference was exclusively due to the FEU’s greater utility in the 67 patients who had received diuretics (98% vs 49%, P < .0001). Both the FEU and the FENa accurately detected acute tubular necrosis. As expected, the FENa outperformed FEU in the setting of infection, in which cytokine stimulation interferes with urea excretion.
Limitations of the study. Approximately 80% of the patients had prerenal azotemia, potentially biasing the results toward a test geared toward detecting this condition. However, since prerenal causes are more common than intrinsic causes, the authors argued that their cohort more accurately reflected the population encountered in clinical practice.
Additionally, only patients with oliguria and more advanced kidney injury (serum creatinine > 1.9 mg/dL) were included in the study, potentially limiting the applicability of these results in patients with preserved urine output in the early stages of renal failure.
Table 2 summarizes the findings of the studies discussed above.8,15,30,32,33
FRACTIONAL EXCRETION OF UREA IN CHILDREN AND THE ELDERLY
The FEU has also been validated in populations at the extremes of age.
In children, Fahimi et al34 performed a cross-sectional study in 43 patients referred to a nephrology service because of acute kidney injury.
An FEU less than 35% had greater sensitivity and specificity than an FENa less than 1% for differentiating prerenal from intrinsic causes in pediatric populations. An FEU of less than 30% had an even greater power of distinguishing between the two. Interestingly, 15 of the 26 patients in the group with prerenal azotemia had an FENa greater than 1%, 8 of whom had an obvious cause (diuretic therapy in 5, salt-losing congenital adrenal hyperplasia in 2, and metabolic alkalosis in 1).
In elderly people, urinary indices are less reliable because of reduced sodium and urea reabsorption and urinary concentrating capability. Thus, the FENa and FEU are increased, making the standard cutoff values unreliable and unpredictable for distinguishing prerenal from intrinsic causes of acute kidney injury.35
WHICH TEST SHOULD BE USED?
Both the FENa and the FEU have been validated in prospective trials as useful clinical indices in identifying prerenal azotemia. Results of these studies vary as to which index is superior and when. This may be attributable to the various definitions of acute kidney injury and diagnostic criteria used in the studies as well as the heterogeneity of patients in each study.
However, the preponderance of evidence indicates that the FEU is more useful than the FENa in patients on diuretics. Since diuretics are widely used, particularly in acute care settings in which acute kidney injury is prevalent, the FEU is a useful clinical tool and should be utilized in this context accordingly. Specifically, when there is a history of recent diuretic use, the evidence supports ordering the FEU alone, or at least in conjunction with the FENa. If the two indices yield disparate results, the physician should look for circumstances that would alter each one of them, such as sepsis or an unrecognized dose of diuretic.
In managing acute kidney injury, distinguishing prerenal from intrinsic causes is a difficult task, particularly because prolonged prerenal azotemia can develop into acute tubular necrosis. Therefore, a single index, calculated at a specific time, often is insufficient to properly characterize the pathogenesis of acute kidney injury, and a combination of both of these indices may increase diagnostic sensitivity and specificity.36 Moreover, urine samples collected after acute changes in volume or osmolarity, such as blood loss, administration of intravenous fluids or parenteral nutrition, or dialysis may compromise their diagnostic utility, and care must be taken to interpret the results in the appropriate clinical context.
The clinician must be aware of both the respective applications and limitations of these indices when using them to guide management and navigate the differential diagnosis in the appropriate clinical settings.
- Nolan CR, Anderson RJ. Hospital-acquired acute renal failure. J Am Soc Nephrol 1998; 9:710–718.
- Mehta RL, Pascual MT, Soroko S, et al; Program to Improve Care in Acute Renal Disease. Spectrum of acute renal failure in the intensive care unit: the PICARD experience. Kidney Int 2004; 66:1613–1621.
- Myers BD, Miller DC, Mehigan JT, et al. Nature of the renal injury following total renal ischemia in man. J Clin Invest 1984; 73:329–341.
- Ho E, Fard A, Maisel A. Evolving use of biomarkers for kidney injury in acute care settings. Curr Opin Crit Care 2010; 16:399–407.
- Steiner RW. Low fractional excretion of sodium in myoglobinuric acute renal failure. Arch Intern Med 1982; 142:1216–1217.
- Vaz AJ. Low fractional excretion of urine sodium in acute renal failure due to sepsis. Arch Intern Med 1983; 143:738–739.
- Pru C, Kjellstrand CM. The FENa test is of no prognostic value in acute renal failure. Nephron 1984; 36:20–23.
- Pépin MN, Bouchard J, Legault L, Ethier J. Diagnostic performance of fractional excretion of urea and fractional excretion of sodium in the evaluations of patients with acute kidney injury with or without diuretic treatment. Am J Kidney Dis 2007; 50:566–573.
- Bellomo R, Ronco C, Kellum JA, Mehta RL, Palevsky P; Acute Dialysis Quality Initiative workgroup. Acute renal failure—definition, outcome measures, animal models, fluid therapy and information technology needs: the Second International Consensus Conference of the Acute Dialysis Quality Initiative (ADQI) Group. Crit Care 2004; 8:R204–R212.
- Mehta RL, Kellum JA, Shah SV, et al; Acute Kidney Injury Network. Acute Kidney Injury Network: report of an initiative to improve outcomes in acute kidney injury. Crit Care 2007; 11:R31.
- Stevens PE, Tamimi NA, Al-Hasani MK, et al. Non-specialist management of acute renal failure. QJM 2001; 94:533–540.
- Feest TG, Round A, Hamad S. Incidence of severe acute renal failure in adults: results of a community based study. BMJ 1993; 306:481–483.
- Liaño F, Pascual J. Epidemiology of acute renal failure: a prospective, multicenter, community-based study. Madrid Acute Renal Failure Study Group. Kidney Int 1996; 50:811–818.
- Thadhani R, Pascual M, Bonventre JV. Acute renal failure. N Engl J Med 1996; 334:1448–1460.
- Bagshaw SM, George C, Bellomo R; ANZICS Database Management Committee. Changes in the incidence and outcome for early acute kidney injury in a cohort of Australian intensive care units. Crit Care 2007; 11:R68.
- Sodium homeostasis in chronic renal disease. Kidney Int 1982; 21:886–897.
- Espinel CH. The FENa test. Use in the differential diagnosis of acute renal failure. JAMA 1976; 236:579–581.
- Schrier RW, Wang W, Poole B, Mitra A. Acute renal failure: definitions, diagnosis, pathogenesis, and therapy. J Clin Invest 2004; 114:5–14.
- Miller TR, Anderson RJ, Linas SL, et al. Urinary diagnostic indices in acute renal failure: a prospective study. Ann Intern Med 1978; 89:47–50.
- Zarich S, Fang LS, Diamond JR. Fractional excretion of sodium. Exceptions to its diagnostic value. Arch Intern Med 1985; 145:108–112.
- Mandal AK, Baig M, Koutoubi Z. Management of acute renal failure in the elderly. Treatment options. Drugs Aging 1996; 9:226–250.
- Sands JM. Critical role of urea in the urine-concentrating mechanism. J Am Soc Nephrol 2007; 18:670–671.
- Goldstein MH, Lenz PR, Levitt MF. Effect of urine flow rate on urea reabsorption in man: urea as a “tubular marker”. J Appl Physiol 1969; 26:594–599.
- Fenton RA, Knepper MA. Urea and renal function in the 21st century: insights from knockout mice. J Am Soc Nephrol 2007; 18:679–688.
- Gréhant N. Physiologique des reins par le dosage de l’urée dans le sang et dans l’urine. J Physiol Pathol Gen (Paris) 1904; 6:1–8.
- Dossetor JB. Creatininemia versus uremia. The relative significance of blood urea nitrogen and serum creatinine concentrations in azotemia. Ann Intern Med 1966; 65:1287–1299.
- Kahn S, Sagel J, Eales L, Rabkin R. The significance of serum creatinine and the blood urea-serum creatinine ratio in azotaemia. S Afr Med J 1972; 46:1828–1832.
- Kerr DNS, Davison JM. The assessment of renal function. Br J Hosp Med 1975; 14:360–372.
- Kaplan AA, Kohn OF. Fractional excretion of urea as a guide to renal dysfunction. Am J Nephrol 1992; 12:49–54.
- Carvounis CP, Nisar S, Guro-Razuman S. Significance of the fractional excretion of urea in the differential diagnosis of acute renal failure. Kidney Int 2002; 62:2223–2229.
- Schmidt C, Höcherl K, Bucher M. Cytokine-mediated regulation of urea transporters during experimental endotoxemia. Am J Physiol Renal Physiol 2007; 292:F1479–F1489.
- Lim DH, Jeong JM, Oh SH, et al. Diagnostic performance of fractional excretion of urea in evaluating patients with acute kidney injury with diuretics treatment. Korean J Nephrol 2009; 28:190–198.
- Diskin CJ, Stokes TJ, Dansby LM, Radcliff L, Carter TB. The comparative benefits of the fractional excretion of urea and sodium in various azotemic oliguric states. Nephron Clin Pract 2010; 114:c145–c150.
- Fahimi D, Mohajeri S, Hajizadeh N, et al. Comparison between fractional excretions of urea and sodium in children with acute kidney injury. Pediatr Nephrol 2009; 24:2409–2412.
- Musso CG, Liakopoulos V, Ioannidis I, Eleftheriadis T, Stefanidis I. Acute renal failure in the elderly: particular characteristics. Int Urol Nephrol 2006; 38:787–793.
- Schönermarck U, Kehl K, Samtleben W. Diagnostic performance of fractional excretion of urea and sodium in acute kidney injury. Am J Kidney Dis 2008; 51:870–871.
An acute kidney injury can result from a myriad of causes and pathogenic pathways. Of these, the two main categories are prerenal causes (eg, heart failure, volume depletion) and causes that are intrinsic to the kidney (eg, acute tubular necrosis). Together, these categories account for more than 70% of all cases.1–3
While early intervention improves outcomes in both of these categories, the physician in the acute care setting must quickly distinguish between them, as their treatments differ. Similar clinical presentations along with confounding laboratory values make this distinction difficult. Furthermore, prolonged prerenal azotemia can eventually lead to acute tubular necrosis.
Therefore, several methods for distinguishing prerenal from intrinsic causes of acute kidney injury have been developed, including urinalysis, response to fluid challenge, the blood urea nitrogen-to-plasma creatinine ratio, levels of various urine electrolytes and biomarkers, and, the topics of our discussion here, the fractional excretion of sodium (FENa) and the fractional excretion of urea (FEU).4 While each method offers a unique picture of renal function, the validity of each may be affected by specific clinical factors.
In light of the frequent use of diuretics in inpatients and outpatients, a review of the utility of the FEU test is warranted. We will therefore present the theory behind the use of the FENa and the FEU for distinguishing intrinsic from prerenal causes of acute kidney injury, the relevant literature comparing the utility of these investigations, and our suggestions for clinical practice.
ACUTE KIDNEY INJURY DEFINED
Acute kidney injury (formerly called acute renal failure) describes an abrupt decline in renal function. Consensus definitions have been published and are gaining widespread acceptance and use.9,10 The current definition is10:
- An absolute increase in serum creatinine ≥ 0.3 mg/dL (26.4 μmol/L) in 48 hours, or
- A percentage increase in serum creatinine ≥ 50% in 48 hours, or
- Urine output < 0.5 mL/kg/hour for > 6 hours.
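These criteria are mechanical enough to express in code. The sketch below is illustrative only (the function and parameter names are ours, not from any clinical library), and it assumes the caller restricts the creatinine changes to the 48-hour window the definition requires:

```python
def meets_aki_criteria(creat_rise_mg_dl, creat_rise_pct,
                       urine_ml_per_kg_per_hr, hours_observed):
    """Apply the consensus definition of acute kidney injury.

    Any one criterion suffices:
      - absolute serum creatinine rise >= 0.3 mg/dL within 48 hours, or
      - relative serum creatinine rise >= 50% within 48 hours, or
      - urine output < 0.5 mL/kg/hour for more than 6 hours.
    """
    return (creat_rise_mg_dl >= 0.3
            or creat_rise_pct >= 50
            or (urine_ml_per_kg_per_hr < 0.5 and hours_observed > 6))
```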
These clear criteria allow for earlier recognition and treatment of this condition.
Acute kidney injury is fairly common in hospitalized patients, with 172 to 620 cases per million population per year.11–14 Furthermore, hospitalized patients with acute kidney injury continue to have high rates of morbidity and death, especially those with more severe cases, in which the mortality rate remains as high as 40%.15
FRACTIONAL EXCRETION OF SODIUM
The FENa is a measure of the extraction of sodium and water from the glomerular filtrate. It is the ratio of sodium clearance (the urinary sodium concentration times the urinary flow rate, divided by the plasma sodium concentration) to the glomerular filtration rate, estimated by creatinine clearance. Because the flow terms cancel, it can be calculated as the ratio of plasma creatinine to urine creatinine divided by the ratio of plasma sodium to urine sodium:
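The equation itself is not reproduced in this text; reconstructed from the verbal description above, with U and P denoting urine and plasma concentrations:

```latex
\mathrm{FE_{Na}}\,(\%) =
\frac{P_{\mathrm{Cr}}/U_{\mathrm{Cr}}}{P_{\mathrm{Na}}/U_{\mathrm{Na}}} \times 100 =
\frac{U_{\mathrm{Na}} \times P_{\mathrm{Cr}}}{P_{\mathrm{Na}} \times U_{\mathrm{Cr}}} \times 100
```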
A euvolemic person with normal renal function and moderate salt intake in a steady state will have an FENa of approximately 1%.16
In 1976, Espinel17 first showed that the FENa could be used during the oliguric phase of acute renal failure to differentiate between prerenal acute kidney injury and acute tubular necrosis. Given the kidney’s ability to reabsorb more sodium during times of volume depletion, Espinel suggested that an FENa of less than 1% reflected normal sodium retention, indicating a prerenal cause, ie, diminished effective circulating volume. A value greater than 3% likely represented tubular damage, indicating that the nephrons were unable to properly reabsorb sodium.
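As a concrete illustration (the laboratory values below are hypothetical, and the helper names are ours), the index can be computed from a spot urine and plasma sample and read against Espinel's cutoffs:

```python
def fena_percent(p_na, u_na, p_cr, u_cr):
    """Fractional excretion of sodium, in percent."""
    return (u_na * p_cr) / (p_na * u_cr) * 100

def espinel_interpretation(fena):
    """Espinel's original cutoffs: < 1% prerenal, > 3% tubular damage."""
    if fena < 1:
        return "prerenal"
    if fena > 3:
        return "acute tubular necrosis"
    return "indeterminate"

# Hypothetical volume-depleted patient: avid sodium retention.
fena = fena_percent(p_na=140, u_na=10, p_cr=2.0, u_cr=60)  # about 0.24%
print(espinel_interpretation(fena))  # prints "prerenal"
```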
The clinical utility of this index was apparent, as the management of prerenal azotemia and acute tubular necrosis differ.18 While both require fluid repletion, the risk of volume overload in acute tubular necrosis is high. Furthermore, acute tubular necrosis secondary to nephrotoxins could require hemodialysis to facilitate clearance of the offending agent.
The FENa test was subsequently validated in a number of studies in different populations and is still widely used.19–21
Limitations to the use of the FENa have been noted in various clinical settings. Notably, it can be falsely depressed in a number of intrinsic renal conditions, such as contrast-induced nephropathy, rhabdomyolysis, and acute glomerulonephritis. Conversely, patients with prerenal acute kidney injury who take diuretics can have a falsely elevated value due to the pharmacologically induced renal excretion of sodium independent of volume status. This is commonly seen in patients on diuretic therapy with baseline low effective circulating volumes, such as those with congestive heart failure and hepatic cirrhosis.
FRACTIONAL EXCRETION OF UREA
Urea is continuously produced in the liver as the end product of protein metabolism. It is a small, water-soluble molecule that freely passes across cell membranes and is therefore continuously filtered and excreted by the kidneys. Not merely a waste product, urea is also important in water balance and constitutes approximately half of the normal solute content of urine.22
Urea’s excretion mechanisms are well characterized.22,23 It is reabsorbed in the proximal tubule, the medullary loop of Henle, and the medullary collecting ducts via facilitated diffusion through specific urea transporters.24 After being reabsorbed in the loop of Henle, urea is resecreted, a process that creates an osmotic gradient along the medulla and ultimately regulates urea excretion and reabsorption in the medullary collecting duct. Low-volume states are associated with decreased urea excretion due to a physiologic increase in antidiuretic hormone secretion, and the reverse is true for high-volume states.
The FEU has been recognized as a clinically useful tool. The correlation between serum and urine urea concentrations was investigated as early as 1904.25 However, most studies during the ensuing century focused on the serum urea concentration or the creatinine-to-urea ratio as a measure of glomerular failure.26–28 In 1992, Kaplan and Kohn29 proposed that the FEU could be a useful measure for assessing renal dysfunction in acute kidney injury. Conceptually similar to the FENa, the FEU is calculated as:
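The equation is missing from this text; reconstructed by direct analogy with the FENa calculation, substituting urea for sodium:

```latex
\mathrm{FE_{U}}\,(\%) =
\frac{U_{\mathrm{Urea}} \times P_{\mathrm{Cr}}}{P_{\mathrm{Urea}} \times U_{\mathrm{Cr}}} \times 100
```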
An FEU less than 35% suggests a prerenal cause of acute kidney injury, while a value greater than 50% suggests an intrinsic one.
FRACTIONAL EXCRETION OF UREA VS FRACTIONAL EXCRETION OF SODIUM
Kaplan and Kohn (1992)
Kaplan and Kohn,29 in their 1992 study, retrospectively analyzed 87 urine samples from 40 patients with renal dysfunction (not specifically acute kidney injury) thought to be secondary to volume depletion in which the FENa was discordant with the FEU.
Findings. Thirty-nine of the 40 patients treated with diuretics had a high FENa value. However, the FEU was low in all of these patients, leading the authors to conclude that the latter may be the more useful of the two indices in evaluating patients receiving diuretics who present with symptoms that suggest prerenal azotemia.
Limitations of the study. On closer inspection, these findings were not generalizable, for several reasons. First, the time that elapsed between administration of diuretics and evaluation of urinary electrolytes varied widely. Additionally, the study was a retrospective analysis of isolated urine specimens without clear correlation to a clinical patient or context. For these reasons, prospective analyses to investigate the utility of the fractional excretion of urea needed to be conducted.
Carvounis et al (2002)
Carvounis et al30 prospectively evaluated the FENa and the FEU in 102 consecutive intensive care patients with acute kidney injury (defined as a serum creatinine concentration > 1.5 mg/dL or an increase of more than 0.5 mg/dL in less than 48 hours). Oliguria was not an inclusion criterion for the study, but patients with acute glomerulonephritis and obstructive nephropathy were excluded. The study grouped subjects into those with prerenal azotemia, prerenal azotemia plus diuretic use, or acute tubular necrosis on the basis of the clinical diagnosis of the attending nephrologist.
Findings. The FEU was more sensitive than the FENa in detecting prerenal azotemia, especially in patients with prerenal azotemia who were receiving diuretics. Overall, the FEU had higher sensitivity and specificity for prerenal azotemia regardless of diuretic use and, more importantly, the best overall positive and negative predictive values for detecting it (99% and 75%, respectively).
These results indicate that, in patients given diuretics, the FENa fails to discriminate between prerenal azotemia and acute tubular necrosis. Conversely, the FEU was excellent in discriminating between all cases of prerenal azotemia and acute tubular necrosis irrespective of the use of diuretics. This has significant practical application, given the frequency of diuretic use in the hospital, particularly in intensive care patients.
Limitations of the study. While the findings supported the utility of the FEU, the study population was limited to intensive care patients. Furthermore, the authors did not report the statistical significance of their findings.30
Pépin et al (2007)
Pépin et al8 performed a similar study, investigating the diagnostic utility of the FENa and the FEU in patients with acute kidney injury, with or without diuretic therapy.
The authors prospectively studied 99 consecutive patients confirmed by an independent nephrologist to have acute kidney injury (defined as an increase in serum creatinine of more than 30% over baseline values within less than 1 week) due to either volume depletion or ischemia. They excluded patients with less common causes of acute kidney injury, such as rhabdomyolysis, obstructive nephropathy, adrenal insufficiency, acute glomerulonephritis, and nephrotoxic acute kidney injury, as well as patients with chronic kidney disease.
Patients were grouped into those with transient acute kidney injury (from decreased kidney perfusion) and persistent acute kidney injury (attributed to acute tubular necrosis), with or without diuretic therapy, according to predefined clinical criteria. They were considered to have diuretic exposure if they had received furosemide (Lasix) within 24 hours or a thiazide within 48 hours of sampling.
Findings. The FENa proved superior to the FEU in patients not taking diuretics and, contrary to the findings of Carvounis et al,30 exhibited diagnostic utility in patients taking diuretics as well. Neither index discriminated between the different etiologies exceptionally well, however.
Of note, the study population was more inclusive than in previous studies, with only 63 intensive care patients, thus making the results more generalizable to all cases of inpatient acute kidney injury. Furthermore, the study included patients with and without oliguria, and the sensitivity and specificity of both the FENa and the FEU were higher in the nonoliguric group (n = 25).
Limitations of the study. The authors acknowledge that a long time may have elapsed between diuretic administration and urine sampling, mitigating the diuretic’s natriuretic effect independent of the patient’s volume status. While this variable may account for the FENa’s better performance here than in the other studies, it does not account for the poor performance of the FEU.
Additionally, few of the findings reached statistical significance.
Lastly, a high percentage (30%) of patients had sepsis. The FEU is less effective in patients with infection, as cytokines interfere with the urea transporters in the kidney and colon.31
Lim et al (2009)
Lim et al32 conducted a study similar in design to that of Pépin et al.8
Findings. The FEU was as clinically useful as the FENa at distinguishing transient from persistent acute kidney injury in patients on diuretics. Using a cutoff FEU of less than 30% and a cutoff FENa of less than 1.5% for transient acute kidney injury (based on calculated receiver operating characteristic curves), FENa was more sensitive and specific than FEU in the nondiuretic groups. In patients exposed to diuretics, FEU was more sensitive but less specific than FENa.
FRACTIONAL EXCRETION OF UREA IN OLIGURIA
Diskin et al (2010)
In 2010, Diskin et al33 published a prospective, observational study of 100 consecutive patients with oliguric azotemia referred to a nephrology service. They defined acute kidney injury as a serum creatinine concentration greater than 1.9 mg/dL and urine output less than 100 mL in 24 hours. They used a higher FEU cutoff for prerenal azotemia (less than 40%) to reflect the known urea secretion rate in oliguric patients (600 mL/24 hours). They used FENa cutoffs of less than 1% for prerenal azotemia and greater than 3% for acute tubular necrosis.
Findings. The FEU was more accurate than the FENa, giving the right diagnosis in 95% vs 54% of cases (P < .0001). The difference was exclusively due to the FEU’s greater utility in the 67 patients who had received diuretics (98% vs 49%, P < .0001). Both the FEU and the FENa accurately detected acute tubular necrosis. As expected, the FENa outperformed the FEU in the setting of infection, in which cytokine stimulation interferes with urea excretion.
Limitations of the study. Approximately 80% of the patients had prerenal azotemia, potentially biasing the results toward a test geared toward detecting this condition. However, since prerenal causes are more common than intrinsic causes, the authors argued that their cohort more accurately reflected the population encountered in clinical practice.
Additionally, only patients with oliguria and more advanced kidney injury (serum creatinine > 1.9 mg/dL) were included in the study, potentially limiting the applicability of these results in patients with preserved urine output in the early stages of renal failure.
Table 2 summarizes the findings of the studies discussed above.8,15,30,32,33
FRACTIONAL EXCRETION OF UREA IN CHILDREN AND THE ELDERLY
The FEU has also been validated in populations at the extremes of age.
In children, Fahimi et al34 performed a cross-sectional study in 43 patients referred to a nephrology service because of acute kidney injury.
An FEU of less than 35% had greater sensitivity and specificity than an FENa of less than 1% for differentiating prerenal from intrinsic causes in pediatric patients. An FEU of less than 30% discriminated between the two even better. Interestingly, 15 of the 26 patients in the prerenal azotemia group had an FENa greater than 1%, 8 of whom had an obvious cause (diuretic therapy in 5, salt-losing congenital adrenal hyperplasia in 2, and metabolic alkalosis in 1).
In elderly people, urinary indices are less reliable because of reduced sodium and urea reabsorption and urinary concentrating capability. Thus, the FENa and FEU are increased, making the standard cutoff values unreliable and unpredictable for distinguishing prerenal from intrinsic causes of acute kidney injury.35
WHICH TEST SHOULD BE USED?
Both the FENa and the FEU have been validated in prospective trials as useful clinical indices in identifying prerenal azotemia. Results of these studies vary as to which index is superior and when. This may be attributable to the various definitions of acute kidney injury and diagnostic criteria used in the studies as well as the heterogeneity of patients in each study.
However, the preponderance of evidence indicates that the FEU is more useful than the FENa in patients on diuretics. Since diuretics are widely used, particularly in acute care settings in which acute kidney injury is prevalent, the FEU is a useful clinical tool and should be used in this context. Specifically, when there is a history of recent diuretic use, the evidence supports ordering the FEU alone, or at least in conjunction with the FENa. If the two indices yield disparate results, the physician should look for circumstances that would alter each of them, such as sepsis or an unrecognized dose of a diuretic.
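These suggestions can be summarized in code as a rough heuristic, not a validated algorithm. The sketch below uses the cutoffs recurring in the studies reviewed (FENa < 1% and FEU < 35% for prerenal azotemia); the function and its name are illustrative only:

```python
def suggest_etiology(fena, feu, on_diuretics, septic=False):
    """Heuristic reading of the two indices, per the suggestions above.

    - With recent diuretic use, weight the FEU (diuretics falsely
      elevate the FENa independent of volume status).
    - With sepsis, fall back to the FENa (cytokines impair urea
      transporters and blunt the FEU).
    - Discordant indices warrant a search for confounders rather
      than a firm diagnosis.
    """
    prerenal_by_fena = fena < 1    # Espinel cutoff
    prerenal_by_feu = feu < 35     # Kaplan-Kohn cutoff
    if on_diuretics and not septic:
        return "prerenal" if prerenal_by_feu else "intrinsic"
    if septic:
        return "prerenal" if prerenal_by_fena else "intrinsic"
    if prerenal_by_fena == prerenal_by_feu:
        return "prerenal" if prerenal_by_fena else "intrinsic"
    return "discordant: reassess for confounders"
```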
In managing acute kidney injury, distinguishing prerenal from intrinsic causes is a difficult task, particularly because prolonged prerenal azotemia can develop into acute tubular necrosis. Therefore, a single index, calculated at a specific time, often is insufficient to properly characterize the pathogenesis of acute kidney injury, and a combination of both of these indices may increase diagnostic sensitivity and specificity.36 Moreover, urine samples collected after acute changes in volume or osmolarity, such as blood loss, administration of intravenous fluids or parenteral nutrition, or dialysis may compromise their diagnostic utility, and care must be taken to interpret the results in the appropriate clinical context.
The clinician must be aware of both the respective applications and limitations of these indices when using them to guide management and navigate the differential diagnosis in the appropriate clinical settings.
An acute kidney injury can result from a myriad of causes and pathogenic pathways. Of these, the two main categories are prerenal causes (eg, heart failure, volume depletion) and causes that are intrinsic to the kidney (eg, acute tubular necrosis). Together, these categories account for more than 70% of all cases.1–3
While early intervention improves outcomes in both of these categories, the physician in the acute care setting must quickly distinguish between them, as their treatments differ. Similar clinical presentations along with confounding laboratory values make this distinction difficult. Furthermore, prolonged prerenal azotemia can eventually lead to acute tubular necrosis.
Therefore, several methods for distinguishing prerenal from intrinsic causes of acute kidney injury have been developed, including urinalysis, response to fluid challenge, the blood urea nitrogen-to-plasma creatinine ratio, levels of various urine electrolytes and biomarkers, and, the topics of our discussion here, the fractional excretion of sodium (FENa) and the fractional excretion of urea (FEU).4 While each method offers a unique picture of renal function, the validity of each may be affected by specific clinical factors.
In light of the frequent use of diuretics in inpatients and outpatients, a review of the utility of the FEU test is warranted. We will therefore present the theory behind the use of the FENa and the FEU for distinguishing intrinsic from prerenal causes of acute kidney injury, the relevant literature comparing the utility of these investigations, and our suggestions for clinical practice.
ACUTE KIDNEY INJURY DEFINED
Acute kidney injury (formerly called acute renal failure) describes an abrupt decline in renal function. Consensus definitions of it have been published and are gaining more widespread acceptance and use.9,10 The current definition is10:
- An absolute increase in serum creatinine ≥ 0.3 mg/dL (26.4 μmol/L) in 48 hours, or
- A percentage increase in serum creatinine ≥ 50% in 48 hours, or
- Urine output < 0.5 mL/kg/hour for > 6 hours.
These clear criteria allow for earlier recognition and treatment of this condition.
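As a minimal sketch, the consensus criteria above translate into a simple check (the function name and units are our own; creatinine in mg/dL, urine output in mL/kg/hour):

```python
def meets_aki_criteria(baseline_cr, current_cr, urine_ml_per_kg_per_h, oliguria_hours):
    """Sketch of the consensus criteria for acute kidney injury.

    Creatinine values (mg/dL) are assumed to have been measured within
    a 48-hour window; urine output is in mL/kg/hour.
    """
    absolute_rise = current_cr - baseline_cr >= 0.3   # >= 0.3 mg/dL in 48 hours
    percent_rise = current_cr >= 1.5 * baseline_cr    # >= 50% increase in 48 hours
    oliguria = urine_ml_per_kg_per_h < 0.5 and oliguria_hours > 6
    return absolute_rise or percent_rise or oliguria

# A creatinine rise from 1.0 to 1.4 mg/dL meets the absolute criterion
print(meets_aki_criteria(1.0, 1.4, 1.0, 0))  # True
```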
Acute kidney injury is fairly common in hospitalized patients, with 172 to 620 cases per million patients per year.11–14 Furthermore, hospitalized patients with acute kidney injury continue to have high rates of morbidity and death, especially those with more severe cases, in which the mortality rate remains as high as 40%.15
FRACTIONAL EXCRETION OF SODIUM
The FENa is a measure of the extraction of sodium and water from the glomerular filtrate. It is the ratio of the clearance of sodium (the urinary sodium concentration times the urinary flow rate, divided by the plasma sodium concentration) to the overall glomerular filtration rate, estimated by the renal clearance of creatinine. It can be calculated as the ratio of plasma creatinine to urine creatinine divided by the ratio of plasma sodium to urine sodium: FENa (%) = (urine sodium × plasma creatinine) / (plasma sodium × urine creatinine) × 100.
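In code, the calculation reads as follows (a sketch with illustrative values; the concentration units cancel, so any consistent units work):

```python
def fena_percent(urine_na, plasma_na, urine_cr, plasma_cr):
    """Fractional excretion of sodium, as a percentage.

    FENa = (urine Na x plasma creatinine) / (plasma Na x urine creatinine) x 100
    """
    return (urine_na * plasma_cr) / (plasma_na * urine_cr) * 100

# Illustrative values: urine Na 20 mEq/L, plasma Na 140 mEq/L,
# urine creatinine 100 mg/dL, plasma creatinine 2 mg/dL
print(round(fena_percent(20, 140, 100, 2), 2))  # 0.29 -- below 1%, suggesting a prerenal cause
```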
A euvolemic person with normal renal function and moderate salt intake in a steady state will have an FENa of approximately 1%.16
In 1976, Espinel17 originally showed that the FENa could be used during the oliguric phase in patients in acute renal failure to differentiate between prerenal acute kidney injury and acute tubular necrosis. Given the kidney’s ability to reabsorb more sodium during times of volume depletion, Espinel suggested that an FENa of less than 1% reflected normal sodium retention, indicating a prerenal cause, ie, diminished effective circulating volume. A value greater than 3% likely represented tubular damage, indicating that the nephrons were unable to properly reabsorb sodium.
The clinical utility of this index was apparent, as the management of prerenal azotemia and acute tubular necrosis differ.18 While both require fluid repletion, the risk of volume overload in acute tubular necrosis is high. Furthermore, acute tubular necrosis secondary to nephrotoxins could require hemodialysis to facilitate clearance of the offending agent.
The FENa test was subsequently validated in a number of studies in different populations and is still widely used.19–21
Limitations to the use of the FENa have been noted in various clinical settings. Notably, it can be falsely depressed in a number of intrinsic renal conditions, such as contrast-induced nephropathy, rhabdomyolysis, and acute glomerulonephritis. Conversely, patients with prerenal acute kidney injury who take diuretics can have a falsely elevated value due to the pharmacologically induced renal excretion of sodium independent of volume status. This is commonly seen in patients on diuretic therapy with baseline low effective circulating volumes, such as those with congestive heart failure and hepatic cirrhosis.
FRACTIONAL EXCRETION OF UREA
Urea is continuously produced in the liver as the end product of protein metabolism. It is a small, water-soluble molecule that freely passes across cell membranes and is therefore continuously filtered and excreted by the kidneys. Not merely a waste product, urea is also important in water balance and constitutes approximately half of the normal solute content of urine.22
Urea’s excretion mechanisms are well characterized.22,23 It is reabsorbed in the proximal tubule, the medullary loop of Henle, and the medullary collecting ducts via facilitated diffusion through specific urea transporters.24 After being reabsorbed, urea is secreted back into the loop of Henle, a process that creates an osmotic gradient along the medulla and ultimately regulates urea excretion and reabsorption in the medullary collecting duct. Low-volume states are associated with decreased urea excretion because of a physiologic increase in antidiuretic hormone secretion; the reverse is true for high-volume states.
The FEU has been recognized as a clinically useful tool. The correlation between serum and urine urea concentrations was investigated as early as 1904.25 However, most studies during the ensuing century focused on the serum urea concentration or the creatinine-to-urea ratio as a measure of glomerular failure.26–28 In 1992, Kaplan and Kohn29 proposed that the FEU could be a useful measure for assessing renal dysfunction in acute kidney injury. Conceptually similar to the FENa, the FEU is calculated as: FEU (%) = (urine urea × plasma creatinine) / (plasma urea × urine creatinine) × 100.
An FEU less than 35% suggests a prerenal cause of acute kidney injury, while a value greater than 50% suggests an intrinsic one.
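A sketch of the FEU calculation and these conventional cutoffs in code (illustrative values; the function names are our own):

```python
def feu_percent(urine_urea, plasma_urea, urine_cr, plasma_cr):
    """Fractional excretion of urea, as a percentage."""
    return (urine_urea * plasma_cr) / (plasma_urea * urine_cr) * 100

def interpret_feu(feu):
    """Conventional cutoffs: < 35% suggests prerenal, > 50% suggests intrinsic."""
    if feu < 35:
        return "suggests prerenal"
    if feu > 50:
        return "suggests intrinsic"
    return "indeterminate"

# Illustrative values: urine urea 300 mg/dL, plasma urea (BUN) 60 mg/dL,
# urine creatinine 80 mg/dL, plasma creatinine 2 mg/dL
feu = feu_percent(300, 60, 80, 2)
print(round(feu, 1), interpret_feu(feu))  # 12.5 suggests prerenal
```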
FRACTIONAL EXCRETION OF UREA VS FRACTIONAL EXCRETION OF SODIUM
Kaplan and Kohn (1992)
Kaplan and Kohn,29 in their 1992 study, retrospectively analyzed 87 urine samples from 40 patients with renal dysfunction (not specifically acute kidney injury) thought to be secondary to volume depletion in which the FENa was discordant with the FEU.
Findings. Thirty-nine of the 40 patients treated with diuretics had a high FENa value. However, the FEU was low in all of these patients, leading the authors to conclude that the latter may be the more useful of the two indices in evaluating patients receiving diuretics who present with symptoms that suggest prerenal azotemia.
Limitations of the study. On closer inspection, these findings were not generalizable, for several reasons. First, the time that elapsed between administration of diuretics and evaluation of urinary electrolytes varied widely. Additionally, the study was a retrospective analysis of isolated urine specimens without clear correlation with the clinical context. For these reasons, prospective analyses of the utility of the FEU were needed.
Carvounis et al (2002)
Carvounis et al30 prospectively evaluated the FENa and the FEU in 102 consecutive intensive care patients with acute kidney injury (defined as a serum creatinine concentration > 1.5 mg/dL or an increase of more than 0.5 mg/dL in less than 48 hours). Oliguria was not an inclusion criterion for the study, but patients with acute glomerulonephritis and obstructive nephropathy were excluded. The study grouped subjects into those with prerenal azotemia, prerenal azotemia plus diuretic use, or acute tubular necrosis on the basis of the clinical diagnosis of the attending nephrologist.
Findings. The FEU was more sensitive than the FENa in detecting prerenal azotemia, especially in patients with prerenal azotemia who were receiving diuretics. Overall, the FEU had higher sensitivity and specificity for prerenal azotemia regardless of diuretic use and, more importantly, the best overall positive and negative predictive values for detecting it (99% and 75%, respectively).
These results indicate that, in patients given diuretics, the FENa fails to discriminate between prerenal azotemia and acute tubular necrosis. Conversely, the FEU was excellent in discriminating between all cases of prerenal azotemia and acute tubular necrosis irrespective of the use of diuretics. This has significant practical application, given the frequency of diuretic use in the hospital, particularly in intensive care patients.
Limitations of the study. While the findings supported the utility of the FEU, the study population was limited to intensive care patients. Furthermore, the authors did not report the statistical significance of their findings.30
Pépin et al (2007)
Pépin et al8 performed a similar study, investigating the diagnostic utility of the FENa and the FEU in patients with acute kidney injury, with or without diuretic therapy.
The authors prospectively studied 99 consecutive patients confirmed by an independent nephrologist to have acute kidney injury (defined as an increase in serum creatinine of more than 30% over baseline values within less than 1 week) due to either volume depletion or ischemia. They excluded patients with less common causes of acute kidney injury, such as rhabdomyolysis, obstructive nephropathy, adrenal insufficiency, acute glomerulonephritis, and nephrotoxic acute kidney injury, as well as patients with chronic kidney disease.
Patients were grouped into those with transient acute kidney injury (from decreased kidney perfusion) and persistent acute kidney injury (attributed to acute tubular necrosis), with or without diuretic therapy, according to predefined clinical criteria. They were considered to have diuretic exposure if they had received furosemide (Lasix) within 24 hours or a thiazide within 48 hours of sampling.
Findings. The FENa proved superior to the FEU in patients not taking diuretics and, contrary to the findings of Carvounis et al,30 exhibited diagnostic utility in patients taking diuretics as well. Neither index discriminated between the different etiologies exceptionally well, however.
Of note, the study population was more inclusive than in previous studies, with only 63 intensive care patients, thus making the results more generalizable to all cases of inpatient acute kidney injury. Furthermore, the study included patients with and without oliguria, and the sensitivity and specificity of both the FENa and the FEU were higher in the nonoliguric group (n = 25).
Limitations of the study. The authors admit that a long time may have elapsed between diuretic administration and urine measurements, thereby mitigating the diuretic’s natriuretic effect independent of the patient’s volume status. While this variable may account for the better performance of the FENa than in the other studies, it does not account for the poor performance of the FEU.
Additionally, few of the findings reached statistical significance.
Lastly, a high percentage (30%) of patients had sepsis. The FEU is less effective in patients with infection, as cytokines interfere with the urea transporters in the kidney and colon.31
Lim et al (2009)
Lim et al32 conducted a study similar in design to that of Pépin et al.8
Findings. The FEU was as clinically useful as the FENa at distinguishing transient from persistent acute kidney injury in patients on diuretics. Using a cutoff FEU of less than 30% and a cutoff FENa of less than 1.5% for transient acute kidney injury (based on calculated receiver operating characteristic curves), FENa was more sensitive and specific than FEU in the nondiuretic groups. In patients exposed to diuretics, FEU was more sensitive but less specific than FENa.
FRACTIONAL EXCRETION OF UREA IN OLIGURIA
Diskin et al (2010)
In 2010, Diskin et al33 published a prospective, observational study of 100 consecutive patients with oliguric azotemia referred to a nephrology service. They defined acute kidney injury as a serum creatinine concentration greater than 1.9 mg/dL and urine output less than 100 mL in 24 hours. They used a higher FEU cutoff for prerenal azotemia of less than 40% to reflect the known urea secretion rate in oliguric patients (600 mL/24 hours). They used FENa cutoffs of less than 1% and greater than 3% to distinguish prerenal azotemia from acute tubular necrosis.
Findings. The FEU was more accurate than the FENa, giving the right diagnosis in 95% vs 54% of cases (P < .0001). The difference was exclusively due to the FEU’s greater utility in the 67 patients who had received diuretics (98% vs 49%, P < .0001). Both the FEU and the FENa accurately detected acute tubular necrosis. As expected, the FENa outperformed FEU in the setting of infection, in which cytokine stimulation interferes with urea excretion.
Limitations of the study. Approximately 80% of the patients had prerenal azotemia, potentially biasing the results toward a test geared toward detecting this condition. However, since prerenal causes are more common than intrinsic causes, the authors argued that their cohort more accurately reflected the population encountered in clinical practice.
Additionally, only patients with oliguria and more advanced kidney injury (serum creatinine > 1.9 mg/dL) were included in the study, potentially limiting the applicability of these results in patients with preserved urine output in the early stages of renal failure.
Table 2 summarizes the findings of the studies discussed above.8,15,30,32,33
FRACTIONAL EXCRETION OF UREA IN CHILDREN AND THE ELDERLY
The FEU has also been validated in populations at the extremes of age.
In children, Fahimi et al34 performed a cross-sectional study in 43 patients referred to a nephrology service because of acute kidney injury.
An FEU less than 35% had greater sensitivity and specificity than an FENa less than 1% for differentiating prerenal from intrinsic causes in pediatric populations. An FEU of less than 30% had an even greater power of distinguishing between the two. Interestingly, 15 of the 26 patients in the group with prerenal azotemia had an FENa greater than 1%, 8 of whom had an obvious cause (diuretic therapy in 5, salt-losing congenital adrenal hyperplasia in 2, and metabolic alkalosis in 1).
In elderly people, urinary indices are less reliable because of reduced sodium and urea reabsorption and urinary concentrating capability. Thus, the FENa and FEU are increased, making the standard cutoff values unreliable and unpredictable for distinguishing prerenal from intrinsic causes of acute kidney injury.35
WHICH TEST SHOULD BE USED?
Both the FENa and the FEU have been validated in prospective trials as useful clinical indices in identifying prerenal azotemia. Results of these studies vary as to which index is superior and when. This may be attributable to the various definitions of acute kidney injury and diagnostic criteria used in the studies as well as the heterogeneity of patients in each study.
However, the preponderance of evidence indicates that the FEU is more useful than the FENa in patients on diuretics. Since diuretics are widely used, particularly in acute care settings in which acute kidney injury is prevalent, the FEU is a useful clinical tool in this context. Specifically, when there is a history of recent diuretic use, the evidence supports ordering the FEU alone, or at least in conjunction with the FENa. If the two indices yield disparate results, the physician should look for circumstances that would alter either index, such as sepsis or an unrecognized dose of diuretic.
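The approach just described can be summarized in a small decision sketch (a hypothetical helper using the cutoffs discussed in this article; it is no substitute for clinical judgment):

```python
def suggest_cause(fena, feu, on_diuretics):
    """Combine FENa and FEU with diuretic history, per the approach above.

    With recent diuretic use the FENa is unreliable, so the FEU carries
    more weight; otherwise the two indices are read together. Discordant
    results call for a search for confounders (eg, sepsis, an
    unrecognized diuretic dose).
    """
    fena_read = "prerenal" if fena < 1 else ("intrinsic" if fena > 3 else "indeterminate")
    feu_read = "prerenal" if feu < 35 else ("intrinsic" if feu > 50 else "indeterminate")
    if on_diuretics:
        return feu_read
    if fena_read == feu_read:
        return fena_read
    return "discordant -- look for confounders"

print(suggest_cause(fena=2.5, feu=30, on_diuretics=True))   # prerenal
print(suggest_cause(fena=0.5, feu=25, on_diuretics=False))  # prerenal
```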
In managing acute kidney injury, distinguishing prerenal from intrinsic causes is a difficult task, particularly because prolonged prerenal azotemia can develop into acute tubular necrosis. Therefore, a single index, calculated at a specific time, often is insufficient to properly characterize the pathogenesis of acute kidney injury, and a combination of both of these indices may increase diagnostic sensitivity and specificity.36 Moreover, urine samples collected after acute changes in volume or osmolarity, such as blood loss, administration of intravenous fluids or parenteral nutrition, or dialysis may compromise their diagnostic utility, and care must be taken to interpret the results in the appropriate clinical context.
The clinician must be aware of both the respective applications and limitations of these indices when using them to guide management and navigate the differential diagnosis in the appropriate clinical settings.
- Nolan CR, Anderson RJ. Hospital-acquired acute renal failure. J Am Soc Nephrol 1998; 9:710–718.
- Mehta RL, Pascual MT, Soroko S, et al; Program to Improve Care in Acute Renal Disease. Spectrum of acute renal failure in the intensive care unit: the PICARD experience. Kidney Int 2004; 66:1613–1621.
- Myers BD, Miller DC, Mehigan JT, et al. Nature of the renal injury following total renal ischemia in man. J Clin Invest 1984; 73:329–341.
- Ho E, Fard A, Maisel A. Evolving use of biomarkers for kidney injury in acute care settings. Curr Opin Crit Care 2010; 16:399–407.
- Steiner RW. Low fractional excretion of sodium in myoglobinuric acute renal failure. Arch Intern Med 1982; 142:1216–1217.
- Vaz AJ. Low fractional excretion of urine sodium in acute renal failure due to sepsis. Arch Intern Med 1983; 143:738–739.
- Pru C, Kjellstrand CM. The FENa test is of no prognostic value in acute renal failure. Nephron 1984; 36:20–23.
- Pépin MN, Bouchard J, Legault L, Ethier J. Diagnostic performance of fractional excretion of urea and fractional excretion of sodium in the evaluations of patients with acute kidney injury with or without diuretic treatment. Am J Kidney Dis 2007; 50:566–573.
- Bellomo R, Ronco C, Kellum JA, Mehta RL, Palevsky P; Acute Dialysis Quality Initiative workgroup. Acute renal failure—definition, outcome measures, animal models, fluid therapy and information technology needs: the Second International Consensus Conference of the Acute Dialysis Quality Initiative (ADQI) Group. Crit Care 2004; 8:R204–R212.
- Mehta RL, Kellum JA, Shah SV, et al; Acute Kidney Injury Network. Acute Kidney Injury Network: report of an initiative to improve outcomes in acute kidney injury. Crit Care 2007; 11:R31.
- Stevens PE, Tamimi NA, Al-Hasani MK, et al. Non-specialist management of acute renal failure. QJM 2001; 94:533–540.
- Feest TG, Round A, Hamad S. Incidence of severe acute renal failure in adults: results of a community based study. BMJ 1993; 306:481–483.
- Liaño F, Pascual J. Epidemiology of acute renal failure: a prospective, multicenter, community-based study. Madrid Acute Renal Failure Study Group. Kidney Int 1996; 50:811–818.
- Thadhani R, Pascual M, Bonventre JV. Acute renal failure. N Engl J Med 1996; 334:1448–1460.
- Bagshaw SM, George C, Bellomo R; ANZICS Database Management Committee. Changes in the incidence and outcome for early acute kidney injury in a cohort of Australian intensive care units. Crit Care 2007; 11:R68.
- Sodium homeostasis in chronic renal disease. Kidney Int 1982; 21:886–897.
- Espinel CH. The FENa test. Use in the differential diagnosis of acute renal failure. JAMA 1976; 236:579–581.
- Schrier RW, Wang W, Poole B, Mitra A. Acute renal failure: definitions, diagnosis, pathogenesis, and therapy. J Clin Invest 2004; 114:5–14.
- Miller TR, Anderson RJ, Linas SL, et al. Urinary diagnostic indices in acute renal failure: a prospective study. Ann Intern Med 1978; 89:47–50.
- Zarich S, Fang LS, Diamond JR. Fractional excretion of sodium. Exceptions to its diagnostic value. Arch Intern Med 1985; 145:108–112.
- Mandal AK, Baig M, Koutoubi Z. Management of acute renal failure in the elderly. Treatment options. Drugs Aging 1996; 9:226–250.
- Sands JM. Critical role of urea in the urine-concentrating mechanism. J Am Soc Nephrol 2007; 18:670–671.
- Goldstein MH, Lenz PR, Levitt MF. Effect of urine flow rate on urea reabsorption in man: urea as a “tubular marker”. J Appl Physiol 1969; 26:594–599.
- Fenton RA, Knepper MA. Urea and renal function in the 21st century: insights from knockout mice. J Am Soc Nephrol 2007; 18:679–688.
- Gréhant N. Physiologique des reins par le dosage de l’urée dans le sang et dans l’urine. J Physiol Pathol Gen (Paris) 1904; 6:1–8.
- Dossetor JB. Creatininemia versus uremia. The relative significance of blood urea nitrogen and serum creatinine concentrations in azotemia. Ann Intern Med 1966; 65:1287–1299.
- Kahn S, Sagel J, Eales L, Rabkin R. The significance of serum creatinine and the blood urea-serum creatinine ratio in azotaemia. S Afr Med J 1972; 46:1828–1832.
- Kerr DNS, Davison JM. The assessment of renal function. Br J Hosp Med 1975; 14:360–372.
- Kaplan AA, Kohn OF. Fractional excretion of urea as a guide to renal dysfunction. Am J Nephrol 1992; 12:49–54.
- Carvounis CP, Nisar S, Guro-Razuman S. Significance of the fractional excretion of urea in the differential diagnosis of acute renal failure. Kidney Int 2002; 62:2223–2229.
- Schmidt C, Höcherl K, Bucher M. Cytokine-mediated regulation of urea transporters during experimental endotoxemia. Am J Physiol Renal Physiol 2007; 292:F1479–F1489.
- Lim DH, Jeong JM, Oh SH, et al. Diagnostic performance of fractional excretion of urea in evaluating patients with acute kidney injury with diuretics treatment. Korean J Nephrol 2009; 28:190–198.
- Diskin CJ, Stokes TJ, Dansby LM, Radcliff L, Carter TB. The comparative benefits of the fractional excretion of urea and sodium in various azotemic oliguric states. Nephron Clin Pract 2010; 114:c145–c150.
- Fahimi D, Mohajeri S, Hajizadeh N, et al. Comparison between fractional excretions of urea and sodium in children with acute kidney injury. Pediatr Nephrol 2009; 24:2409–2412.
- Musso CG, Liakopoulos V, Ioannidis I, Eleftheriadis T, Stefanidis I. Acute renal failure in the elderly: particular characteristics. Int Urol Nephrol 2006; 38:787–793.
- Schönermarck U, Kehl K, Samtleben W. Diagnostic performance of fractional excretion of urea and sodium in acute kidney injury. Am J Kidney Dis 2008; 51:870–871.
KEY POINTS
- Finding the cause of acute kidney injury is important, as management strategies differ.
- Although cutoff values differ among studies, in a patient with acute kidney injury, an FENa lower than 1% suggests a prerenal cause, whereas a value higher than 3% suggests an intrinsic cause.
- Similarly, an FEU less than 35% suggests a prerenal cause of acute kidney injury, whereas a value higher than 50% suggests an intrinsic one.
- The FENa can be falsely high in patients taking a diuretic; it can be falsely low in a number of intrinsic renal conditions, such as contrast-induced nephropathy, rhabdomyolysis, and acute glomerulonephritis.
Deep brain stimulation: What can patients expect from it?
Deep brain stimulation is an important therapy for Parkinson disease and other movement disorders. It involves implantation of a pulse generator that can be adjusted by telemetry and can be activated and deactivated by clinicians and patients. It is therefore also a good investigational tool, allowing for double-blind, sham-controlled clinical trials by testing the effects of the stimulation with optimal settings compared with no stimulation.
This article will discuss the approved indications for deep brain stimulation (particularly for managing movement disorders), the benefits that can be expected, the risks, the complications, the maintenance required, how candidates for this treatment are evaluated, and the surgical procedure for implantation of the devices.
DEVICE SIMILAR TO HEART PACEMAKERS
The deep brain stimulation system must be programmed by a physician or midlevel practitioner, who observes a symptom and then adjusts the pulse generator's settings until the symptom improves. This can be a very time-consuming process.
In contrast to heart pacemakers, which run at low frequencies, the brain devices for movement disorders are almost always set to a high frequency, greater than 100 Hz. For this reason, they consume more energy and need larger batteries than those in modern heart pacemakers.
The batteries in these generators typically last 3 to 5 years and are replaced in an outpatient procedure. Newer, smaller, rechargeable devices are expected to last longer but require more maintenance and care by patients, who have to recharge them at home periodically.
INDICATIONS FOR DEEP BRAIN STIMULATION
Deep brain stimulation is approved by the US Food and Drug Administration (FDA) for specific indications:
- Parkinson disease
- Essential tremor
- Primary dystonia (under a humanitarian device exemption)
- Intractable obsessive-compulsive disorder (also under a humanitarian device exemption). We will not discuss this indication further in this paper.
For each of these conditions, deep brain stimulation is considered when nonsurgical management has failed, as is the case for most functional neurosurgical treatments.
Investigations under way in other disorders
Several studies of deep brain stimulation are currently in progress under FDA-approved investigational device exemptions. Some, with funding from industry, are exploring its use in neuropsychiatric conditions other than parkinsonism. Two large clinical trials are evaluating its use for treatment-refractory depression, a common problem and a leading cause of disability in the industrialized world. Multiple investigators are also exploring novel uses of this technology in disorders ranging from obsessive-compulsive disorder to epilepsy.
Investigation is also under way at Cleveland Clinic in a federally funded, prospective, randomized clinical trial of deep brain stimulation for patients with thalamic pain syndrome. The primary hypothesis is that stimulation of the ventral striatal and ventral capsular area will modulate the affective component of this otherwise intractable pain syndrome, reducing pain-related disability and improving quality of life.
DEEP BRAIN STIMULATION VS ABLATION
Before deep brain stimulation became available, the only surgical options for patients with advanced Parkinson disease, tremor, or dystonia were ablative procedures such as pallidotomy (ablation of part of the globus pallidus) and thalamotomy (ablation of part of the thalamus). These procedures had been well known for several decades but fell out of favor when levodopa became available in the 1960s and revolutionized the medical treatment of Parkinson disease.
Surgery for movement disorders, in particular Parkinson disease, had a rebirth in the late 1980s when the limitations and complications associated with the pharmacologic management of Parkinson disease became increasingly evident. Ablative procedures are still used to treat advanced Parkinson disease, but much less commonly in industrialized countries.
Although pallidotomy and thalamotomy can have excellent results, they are not as safe as deep brain stimulation, which has the advantage of being reversible, modulating the function of an area rather than destroying it. Any unwanted effect can be immediately altered or reversed, unlike ablative procedures, in which any change is permanent. In addition, deep brain stimulation is adjustable, and the settings can be optimized as the disease progresses over the years.
Ablative procedures can be risky when performed bilaterally, while deep brain stimulation is routinely done on both hemispheres for patients with bilateral symptoms.
Although deep brain stimulation is today’s surgical treatment of choice, it is not perfect. It has the disadvantage of requiring lifelong maintenance of the hardware, for which the patient remains dependent on a medical center. Patients are usually seen more often at the specialized center in the first few months after surgery for optimization of programming and titration of drugs. (During this time, most patients see a gradual, substantial reduction in medication intake.) They are then followed by their physician and visit the center less often for monitoring of disease status and for further adjustments to the stimulator.
Most patients, to date, receive nonrechargeable pulse generators. As mentioned above, the batteries in these devices typically last 3 to 5 years. Preferably, batteries are replaced before they are completely depleted, to avoid interruption of therapy. Periodic visits to the center allow clinicians to estimate battery expiration ahead of time and plan replacements accordingly.
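The replacement-planning arithmetic is simple enough to sketch. The following Python fragment is purely illustrative: the 3- and 5-year bounds come from the text, while the function name and the sample implant date are assumptions for the example.

```python
from datetime import date, timedelta

def replacement_window(implant_date: date,
                       min_years: float = 3.0,
                       max_years: float = 5.0) -> tuple[date, date]:
    """Earliest and latest dates around which generator replacement
    should be planned, given the 3-to-5-year battery life cited above."""
    return (implant_date + timedelta(days=round(min_years * 365)),
            implant_date + timedelta(days=round(max_years * 365)))

# Hypothetical implant date; in practice the window would be refined
# at periodic visits as battery status is measured.
start, end = replacement_window(date(2020, 1, 15))
```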
Rechargeable pulse generators have been recently introduced and are expected to last up to 9 years. They are an option for patients who can comply with the requirements for periodic home recharging of the hardware.
Patients are given a remote control so that they can turn the device on or off and check its status. Most patients keep it turned on all the time, although some turn it off at night to save battery life.
WHAT CAN PARKINSON PATIENTS EXPECT FROM THIS THERAPY?
Typically, some parkinsonian symptoms predominate over others, although some patients with advanced disease present with a severe combination of multiple disabling symptoms. Deep brain stimulation is best suited to address some of the cardinal motor symptoms, particularly tremor, rigidity, and bradykinesia, and motor fluctuations such as “wearing off” and dyskinesia.
Improvement in some motor symptoms
As a general rule, appendicular symptoms such as limb tremor and rigidity are more responsive to this therapy than axial symptoms such as gait and balance problems, but some patients experience improvement in gait as well. Other symptoms, such as swallowing or urinary symptoms, are seldom helped.
Although deep brain stimulation can help manage key motor symptoms and improve quality of life, it does not cure Parkinson disease. Also, there is no evidence to date that it slows disease progression, although this is a topic of ongoing investigation.
Fewer motor fluctuations
A common complaint of patients with advanced Parkinson disease is frequent—and often unpredictable—fluctuations between the “on” state (ie, when the effects of the patient’s levodopa therapy are apparent) and the “off” state (ie, when the levodopa does not seem to be working). Sometimes, in the on state, patients experience involuntary choreic or ballistic movements, called dyskinesias. Patients also complain that the on periods become progressively shorter, so that the day is spent alternating between shorter on states (during which the patient may be dyskinetic) and longer off states, limiting independence and quality of life.
Deep brain stimulation can help patients prolong the on time while reducing the amplitude of these fluctuations so that the symptoms are not as severe in the off time and dyskinesias are reduced in the on time.
Some patients undergo deep brain stimulation primarily to manage the adverse effects of levodopa rather than to control the symptoms of the disease itself. While these patients need levodopa to address the disabling symptoms of the disease, they are also especially prone to developing levodopa-induced dyskinesias, fluctuating quickly from a lack of movement (the off state) to uncontrollable movements (the on state).
Deep brain stimulation typically allows the dosage of levodopa to be significantly reduced and gives patients more on time with fewer side effects and less fluctuation between the on and off states.
Response to levodopa predicts deep brain stimulation’s effects
Whether a patient is likely to be helped by deep brain stimulation can be tested with reasonable predictability by giving a single therapeutic dose of levodopa after the patient has been free of the drug for 12 hours. If there is an obvious difference on objective quantitative testing between the off and on states with a single dose, the patient is likely to benefit from deep brain stimulation. Those who do not respond well or are known to have never been well controlled by levodopa are likely poor candidates.
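The logic of the levodopa challenge can be sketched as a simple percent-improvement calculation. This is an illustration only: the function name and the 30% screening cutoff are assumptions for the example, not values taken from this article (lower UPDRS motor scores indicate better motor function).

```python
def levodopa_challenge_improvement(updrs_off: float, updrs_on: float) -> float:
    """Percent improvement in UPDRS motor score after a single
    therapeutic dose of levodopa, relative to the off-state score."""
    if updrs_off <= 0:
        raise ValueError("off-state score must be positive")
    return 100.0 * (updrs_off - updrs_on) / updrs_off

# Hypothetical patient: motor score 48 off medication, 26 on levodopa.
improvement = levodopa_challenge_improvement(48, 26)  # about 46%
# A cutoff near 30% is often used when screening candidates (an
# assumption here, not a threshold stated in this article).
is_candidate = improvement >= 30.0
```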
The test is also used as an indicator of whether the patient’s gait can be improved. Patients whose gait is substantially improved by levodopa, even for only a brief period of time, have a better chance of experiencing improvement in this domain with deep brain stimulation than those who do not show any gait improvement.
An important and notable exception to this rule is tremor control. Even Parkinson patients who do not experience significant improvement in tremor with levodopa (ie, who have medication-resistant tremors) are still likely to benefit from deep brain stimulation. Overall, tremor is the symptom that is most consistently improved with deep brain stimulation.
Results of clinical trials
Several clinical trials have demonstrated that deep brain stimulation plus medication works better than medications alone for advanced Parkinson disease.
Deuschl et al1 conducted a randomized trial in 156 patients with advanced Parkinson disease. Patients receiving subthalamic deep brain stimulation plus medication had significantly greater improvement in motor symptoms as measured by the Unified Parkinson’s Disease Rating Scale as well as in quality-of-life measures than patients receiving medications only.
Krack et al2 reported on the outcomes of 49 patients with advanced Parkinson disease who underwent deep brain stimulation and then were prospectively followed. At 5 years, motor function had improved by approximately 55% from baseline, activities-of-daily-living scores had improved by 49%, and patients continued to need significantly less levodopa and to experience less drug-induced dyskinesia.
Complications related to deep brain stimulation occurred in both studies, including two large intracerebral hemorrhages, one of which was fatal.
Weight gain. During the first 3 months after the device was implanted, patients tended to gain weight (mean 3 kg, maximum 5 kg). Although weight gain is considered an adverse effect, many patients are quite thin by the time they are candidates for deep brain stimulation, and in such cases gaining lean weight can be a benefit.
Patients with poorly controlled Parkinson disease lose weight for several reasons: increased calorie expenditure from shaking and excessive movements; diet modification and protein restriction for some patients who realize that protein competes with levodopa absorption; lack of appetite due to depression or from poor taste sensation (due to anosmia); and decreased overall food consumption due to difficulty swallowing.
DEEP BRAIN STIMULATION FOR ESSENTIAL TREMOR
Essential tremor is more common than Parkinson disease, with a prevalence in the United States estimated at approximately 4,000 per 100,000 people older than 65 years.
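To put that rate in perspective, prevalence per 100,000 converts directly to an estimated case count. In this sketch the 4,000 per 100,000 rate comes from the text, while the 40 million figure for the US population older than 65 is an assumed round number for illustration.

```python
def cases_from_prevalence(prevalence_per_100k: float, population: float) -> float:
    """Estimated number of cases implied by a prevalence rate."""
    return prevalence_per_100k * population / 100_000

# 4,000 per 100,000 (ie, about 4%) of an assumed 40 million people
# older than 65 yields roughly 1.6 million people with essential tremor.
estimated_cases = cases_from_prevalence(4_000, 40_000_000)
```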
The tremor is often bilateral and is characteristically an action tremor, but in many patients it also has a postural, and sometimes a resting, component. It is distinct from parkinsonian tremor, which is usually predominantly a resting tremor. The differential diagnosis includes tremors secondary to central nervous system degenerative disorders as well as psychogenic tremors.
Drinking alcohol tends to relieve essential tremors, a finding that can often be elicited in the patient’s history. Patients whose symptoms improve with an alcoholic beverage are more likely to have essential tremor than another diagnosis.
Response to deep brain stimulation
Most patients with essential tremor respond well to deep brain stimulation of the contralateral ventral intermediate (ventralis intermedius) nucleus of the thalamus.
Treatment is usually started unilaterally, usually aimed at alleviating tremor in the patient’s dominant upper extremity. In selected cases, preference is given to treating the nondominant extremity when it is more severely affected than the dominant extremity.
Implantation of a device on the second side is offered to some patients who continue to be limited in activity and quality of life due to tremor of the untreated extremity. Surgery of the second side can be more complicated than the initial unilateral procedure. In particular, some patients may present with dysarthria, although that seems to be less common in our experience than initially estimated.
In practice, patients with moderate tremors tend to have an excellent response to deep brain stimulation. For this particular indication, if the response is not satisfactory, the treating team tends to consider surgically revising the placement of the lead rather than considering the patient a nonresponder. Patients with very severe tremors may have some residual tremor despite substantial improvement in severity. In our experience, patients with a greater proximal component of tremor tend to have less satisfactory results.
For challenging cases, implantation of additional electrodes in the thalamus or in new targets currently under investigation is sometimes considered, although this is an off-label use.
Treatment of secondary tremors, such as poststroke tremor or tremor due to multiple sclerosis, is sometimes attempted with deep brain stimulation. This is also an off-label option but is considered in selected cases for quality-of-life management.
Patients with axial tremors such as head or voice tremor are less likely to be helped by deep brain stimulation.
DEEP BRAIN STIMULATION FOR PRIMARY DYSTONIA
Generalized dystonia is a less common but severely impairing movement disorder.
Deep brain stimulation is approved for primary dystonia under a humanitarian device exemption, a regulatory mechanism for less common conditions. Deep brain stimulation is an option for patients who have significant impairment related to dystonia and who have not responded to conservative management such as anticholinergic agents, muscle relaxants, benzodiazepines, levodopa, or combinations of these drugs. Surgery has been shown to be effective for patients with primary generalized dystonia, whether or not they tested positive for a dystonia-related gene such as DYT1.
Kupsch et al3 evaluated 40 patients with primary dystonia in a randomized controlled trial of pallidal (globus pallidus pars interna) active deep brain stimulation vs sham stimulation (in which the device was implanted but not activated) for 3 months. Treated patients improved significantly more than controls (39% vs 5%) on the Burke-Fahn-Marsden Dystonia Rating Scale (BFMDRS).4 Similar improvement was noted when those receiving sham stimulation were switched to active stimulation.
During long-term follow-up, the results were generally sustained, with substantial improvement from deep brain stimulation in all movement symptoms evaluated except for speech and swallowing. Unlike improvement in tremor, which is quickly evident during testing in the operating room, the improvement in dystonia occurs gradually, and it may take months for patients to notice a change. Similarly, if stimulation stops because of device malfunction or dead batteries, symptoms sometimes do not recur for weeks or months.
Deep brain stimulation is sometimes offered to patients with dystonia secondary to conditions such as cerebral palsy or trauma (an off-label use). Although benefits are less consistent, deep brain stimulation remains an option for these individuals, aimed at alleviating some of the disabling symptoms. In patients with cerebral palsy or other secondary dystonias, it is sometimes difficult to distinguish how much of the disability is related to spasticity vs dystonia. Deep brain stimulation aims to alleviate the dystonic component; the spasticity may be managed with other options such as intrathecal baclofen (Lioresal).
Patients with tardive dystonia, which is usually secondary to treatment with antipsychotic agents, have been reported to respond well to bilateral deep brain stimulation. Gruber et al5 reported on a series of nine patients with a mean follow-up of 41 months. Patients improved by a mean of approximately 74% on the BFMDRS after 3 to 6 months of deep brain stimulation compared with baseline. None of the patients presented with long-term adverse effects, and quality of life and disability scores also improved significantly.
CANDIDATES ARE EVALUATED BY A MULTIDISCIPLINARY TEAM
Cleveland Clinic conducts a comprehensive 2-day evaluation for patients being considered for deep brain stimulation surgery, including consultations with specialists in neurology, neurosurgery, neuropsychology, and psychiatry.
Patients with significant cognitive deficits—near or meeting the diagnostic criteria for dementia—are usually not considered candidates for surgery for Parkinson disease. Deep brain stimulation is not aimed at alleviating cognitive issues related to Parkinson disease or other concomitant dementia, and there is a risk that neurostimulation could further worsen cognitive function in an already compromised brain. Moreover, significant abnormalities detected by neuroimaging may prompt reconsideration of the diagnosis and, in some cases, make a patient a less-than-ideal candidate for surgery.
An important part of the process is a discussion with the patient and family about the risks and the potential short-term and long-term benefits. Informed consent requires a good understanding of this equation. Patients are counseled to have realistic expectations about what the procedure can offer. Deep brain stimulation can help some of the symptoms of Parkinson disease but will not cure it, and there is no evidence to date that it slows its progression. At 5 or 10 years after surgery, patients are expected to be worse overall than they were in the first year after surgery, because of disease progression. However, patients who receive this treatment are expected, in general, to be doing better 5 or 10 years later (or longer) than those who do not receive it.
In addition to the discussion about risks, benefits, and expectations, careful attention is devoted to hardware maintenance, including battery replacement. In particular, younger patients should be informed about the risk of breakage of the leads and the extension wire, as they are likely to outlive their implant. Patients and caregivers must also be able to return to the specialized center if the hardware malfunctions.
Patients are also informed that after the system is implanted they cannot undergo magnetic resonance imaging (MRI) except of the head, performed with a specific head coil and under specific parameters. MRI of any other body part and with a body coil is contraindicated.
HOW THE DEVICE IS IMPLANTED
There are several options for implanting a deep brain stimulation device.
Implantation with the patient awake, using a stereotactic headframe
At Cleveland Clinic, we usually prefer implantation with a stereotactic headframe. The base or “halo” of the frame is applied to the head under local anesthesia, followed by imaging via computed tomography (Figure 1). Typically, the tomographic image is fused to a previously acquired MRI image, but the MRI is sometimes either initially performed or repeated on the day of surgery.
Patients are sedated for the beginning of the procedure, while the surgical team is opening the skin and drilling the opening in the skull for placement of the lead. The patient is awakened for placement of the electrodes, which is not painful.
Microelectrode recording is typically performed in order to refine the targeting based on the stereotactic coordinates derived from neuroimaging. Although cadaver atlases exist and provide a guide to the stereotactic localization of subcortical structures, they are not completely accurate in representing the brain anatomy of all patients.
By “listening” to cells and knowing their characteristic signals in specific areas, landmarks can be created, forming an individualized map of the patient’s brain target. Microelectrode recording is invasive and has risks, including the risk of a brain hemorrhage. It is routinely done in most specialized deep brain stimulation centers because it can provide better accuracy and precision in lead placement.
When the target has been located and refined by microelectrode recording, the permanent electrode is inserted. Fluoroscopy is usually used to verify the direction and stability of placement during the procedure.
An intraoperative test of the effects of deep brain stimulation is routinely performed to verify that some benefits can be achieved with the brain lead in its location, to determine the threshold for side effects, or both. For example, the patient may be asked to hold a cup as if trying to drink from it and to write or to draw a spiral on a clipboard to assess for improvements in tremor. Rigidity and bradykinesia can also be tested for improvements.
This intraoperative test is not aimed at assessing the best possible outcome of deep brain stimulation, nor even at demonstrating improvement in every symptom that burdens the patient. Rather, it evaluates the likelihood that programming will be feasible with the lead in its implanted position.
Subsequently, implantation of the pulse generator in the chest and connection to the brain lead is completed, usually with the patient under general anesthesia.
Implantation under general anesthesia, with intraoperative MRI
A new alternative to “awake stereotactic surgery” is implantation with the patient under general anesthesia, with intraoperative MRI. We have started to do this procedure in a new operating suite that is attached to an MRI suite. The magnet can be taken in and out of the operating room, allowing the surgeon to verify the location of the implanted leads right at the time of the procedure. In this fashion, intraoperative images are used to guide implantation instead of awake microelectrode recording. This is a new option for patients who cannot tolerate awake surgery and for those who have a contraindication to the regular stereotactic procedure with the patient awake.
Risks of bleeding and infection
The potential complications of implanting a device and leads in the brain can be significant.
Hemorrhage can occur, resulting in a superficial or deep hematoma.
Infection and erosion may require removal of the hardware for antibiotic treatment and possible reimplantation.
Other risks include those related to tunneling the wires from the head to the chest, to implanting the device in the chest, and to serious medical complications after surgery. Hardware failure can occur and requires additional surgery. Finally, environmental risks and risks related to medical devices such as MRI, electrocautery, and cardioversion should also be considered.
Deep brain stimulation is advantageous for its reversibility. If during postoperative programming the brain leads are considered not to be ideally placed, revisions can be done to reposition the leads.
- Deuschl G, Schade-Brittinger C, Krack P, et al; German Parkinson Study Group, Neurostimulation Section. A randomized trial of deep-brain stimulation for Parkinson’s disease. N Engl J Med 2006; 355:896–908.
- Krack P, Batir A, Van Blercom N, et al. Five-year follow-up of bilateral stimulation of the subthalamic nucleus in advanced Parkinson’s disease. N Engl J Med 2003; 349:1925–1934.
- Kupsch A, Benecke R, Müller J, et al; Deep-Brain Stimulation for Dystonia Study Group. Pallidal deep-brain stimulation in primary generalized or segmental dystonia. N Engl J Med 2006; 355:1978–1990.
- Burke RE, Fahn S, Marsden CD, Bressman SB, Moskowitz C, Friedman J. Validity and reliability of a rating scale for the primary torsion dystonias. Neurology 1985; 35:73–77.
- Gruber D, Trottenberg T, Kivi A, et al. Long-term effects of pallidal deep brain stimulation in tardive dystonia. Neurology 2009; 73:53–58.
Deep brain stimulation is an important therapy for Parkinson disease and other movement disorders. It involves implantation of a pulse generator that can be adjusted by telemetry and can be activated and deactivated by clinicians and patients. It is therefore also a good investigational tool, allowing for double-blind, sham-controlled clinical trials by testing the effects of the stimulation with optimal settings compared with no stimulation.
This article will discuss the approved indications for deep brain stimulation (particularly for managing movement disorders), the benefits that can be expected, the risks, the complications, the maintenance required, how candidates for this treatment are evaluated, and the surgical procedure for implantation of the devices.
DEVICE SIMILAR TO HEART PACEMAKERS
The deep brain stimulation system must be programmed by a physician or midlevel practitioner, who observes a symptom and adjusts the pulse generator’s settings until the symptom improves. This can be a very time-consuming process.
In contrast to heart pacemakers, which run at low frequencies, the brain devices for movement disorders are almost always set to a high frequency, greater than 100 Hz. For this reason, they consume more energy and need larger batteries than those in modern heart pacemakers.
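The link between stimulation frequency and battery drain can be made concrete with a back-of-the-envelope calculation. The parameter values below (3 V amplitude, 60-µs pulses, 130 Hz, roughly 1-kΩ electrode impedance) are assumed typical orders of magnitude, not figures from this article.

```python
def mean_stimulation_power(voltage_v: float, pulse_width_s: float,
                           frequency_hz: float, impedance_ohm: float) -> float:
    """Approximate mean power of a voltage-controlled stimulator:
    instantaneous power V^2/R scaled by the pulse duty cycle."""
    duty_cycle = pulse_width_s * frequency_hz  # fraction of time "on"
    return (voltage_v ** 2 / impedance_ohm) * duty_cycle

# Assumed deep-brain-stimulation settings: roughly 70 microwatts.
p_dbs = mean_stimulation_power(3.0, 60e-6, 130.0, 1000.0)
# The same pulse delivered at a pacemaker-like 1 Hz draws 130 times
# less, illustrating why high-frequency devices need larger batteries.
p_low = mean_stimulation_power(3.0, 60e-6, 1.0, 1000.0)
```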
The batteries in these generators typically last 3 to 5 years and are replaced in an outpatient procedure. Newer, smaller, rechargeable devices are expected to last longer but require more maintenance and care by patients, who have to recharge them at home periodically.
INDICATIONS FOR DEEP BRAIN STIMULATION
Deep brain stimulation is approved by the US Food and Drug Administration (FDA) for specific indications:
- Parkinson disease
- Essential tremor
- Primary dystonia (under a humanitarian device exemption)
- Intractable obsessive-compulsive disorder (also under a humanitarian device exemption). We will not discuss this indication further in this paper.
For each of these conditions, deep brain stimulation is considered when nonsurgical management has failed, as is the case for most functional neurosurgical treatments.
Investigations under way in other disorders
Several studies of deep brain stimulation are currently in progress under FDA-approved investigational device exemptions. Some, with funding from industry, are exploring its use in neuropsychiatric conditions other than parkinsonism. Two large clinical trials are evaluating its use for treatment-refractory depression, a common problem and a leading cause of disability in the industrialized world. Multiple investigators are also exploring novel uses of this technology in disorders ranging from obsessive-compulsive disorder to epilepsy.
Investigation is also under way at Cleveland Clinic in a federally funded, prospective, randomized clinical trial of deep brain stimulation for patients with thalamic pain syndrome. The primary hypothesis is that stimulation of the ventral striatal and ventral capsular area will modulate the affective component of this otherwise intractable pain syndrome, reducing pain-related disability and improving quality of life.
Deep brain stimulation is approved for primary dystonia under a humanitarian device exemption, a regulatory mechanism for less common conditions. Deep brain stimulation is an option for patients who have significant impairment related to dystonia and who have not responded to conservative management such as anticholinergic agents, muscle relaxants, benzodiazepines, levodopa, or combinations of these drugs. Surgery has been shown to be effective for patients with primary generalized dystonia, whether or not they tested positive for a dystonia-related gene such as DYT1.
Kupsch et al3 evaluated 40 patients with primary dystonia in a randomized controlled trial of pallidal (globus pallidus pars interna) active deep brain stimulation vs sham stimulation (in which the device was implanted but not activated) for 3 months. Treated patients improved significantly more than controls (39% vs 5%) in the Burke-Fahn- Marsden Dystonia Rating Scale (BFMDRS).4 Similar improvement was noted when those receiving sham stimulation were switched to active stimulation.
During long-term follow-up, the results were generally sustained, with substantial improvement from deep brain stimulation in all movement symptoms evaluated except for speech and swallowing. Unlike improvement in tremor, which is quickly evident during testing in the operating room, the improvement in dystonia occurs gradually, and it may take months for patients to notice a change. Similarly, if stimulation stops because of device malfunction or dead batteries, symptoms sometimes do not recur for weeks or months.
Deep brain stimulation is sometimes offered to patients with dystonia secondary to conditions such as cerebral palsy or trauma (an off-label use). Although benefits are less consistent, deep brain stimulation remains an option for these individuals, aimed at alleviating some of the disabling symptoms. In patients with cerebral palsy or other secondary dystonias, it is sometimes difficult to distinguish how much of the disability is related to spasticity vs dystonia. Deep brain stimulation aims to alleviate the dystonic component; the spasticity may be managed with other options such as intrathecal baclofen (Lioresal).
Patients with tardive dystonia, which is usually secondary to treatment with antipsychotic agents, have been reported to respond well to bilateral deep brain stimulation. Gruber et al5 reported on a series of nine patients with a mean follow-up of 41 months. Patients improved by a mean of approximately 74% on the BFMDRS after 3 to 6 months of deep brain stimulation compared with baseline. None of the patients presented with long-term adverse effects, and quality of life and disability scores also improved significantly.
CANDIDATES ARE EVALUATED BY A MULTIDISCIPLINARY TEAM
Cleveland Clinic conducts a comprehensive 2-day evaluation for patients being considered for deep brain stimulation surgery, including consultations with specialists in neurology, neurosurgery, neuropsychology, and psychiatry.
Patients with significant cognitive deficits—near or meeting the diagnostic criteria for dementia—are usually not recommended to have surgery for Parkinson disease. Deep brain stimulation is not aimed at alleviating cognitive issues related to Parkinson disease or other concomitant dementia. In addition, there is a risk that neurostimulation could further worsen cognitive function in the already compromised brain. Moreover, patients with significant abnormalities detected by neuroimaging may have their diagnosis reconsidered in some cases, and some patients may not be deemed ideal candidates for surgery.
An important part of the process is a discussion with the patient and family about the risks and the potential short-term and long-term benefits. Informed consent requires a good understanding of this equation. Patients are counseled to have realistic expectations about what the procedure can offer. Deep brain stimulation can help some of the symptoms of Parkinson disease but will not cure it, and there is no evidence to date that it reduces its progress. At 5 or 10 years after surgery, patients are expected to be worse overall than they were in the first year after surgery, because of disease progression. However, patients who receive this treatment are expected, in general, to be doing better 5 or 10 years later (or longer) than those who do not receive it.
In addition to the discussion about risks, benefits, and expectations, a careful discussion is also devoted to hardware maintenance, including how to change the batteries. Particularly, younger patients should be informed about the risk of breakage of the leads and the extension wire, as they are likely to outlive their implant. Patients and caregivers should be able to come to the specialized center should hardware malfunction occur.
Patients are also informed that after the system is implanted they cannot undergo magnetic resonance imaging (MRI) except of the head, performed with a specific head coil and under specific parameters. MRI of any other body part and with a body coil is contraindicated.
HOW THE DEVICE IS IMPLANTED
There are several options for implanting a deep brain stimulation device.
Implantation with the patient awake, using a stereotactic headframe
At Cleveland Clinic, we usually prefer implantation with a stereotactic headframe. The base or “halo” of the frame is applied to the head under local anesthesia, followed by imaging via computed tomography (Figure 1). Typically, the tomographic image is fused to a previously acquired MRI image, but the MRI is sometimes either initially performed or repeated on the day of surgery.
Patients are sedated for the beginning of the procedure, while the surgical team is opening the skin and drilling the opening in the skull for placement of the lead. The patient is awakened for placement of the electrodes, which is not painful.
Microelectrode recording is typically performed in order to refine the targeting based on the stereotactic coordinates derived from neuroimaging. Although cadaver atlases exist and provide a guide to the stereotactic localization of subcortical structures, they are not completely accurate in representing the brain anatomy of all patients.
By “listening” to cells and knowing their characteristic signals in specific areas, landmarks can be created, forming an individualized map of the patient’s brain target. Microelectrode recording is invasive and has risks, including the risk of a brain hemorrhage. It is routinely done in most specialized deep brain stimulation centers because it can provide better accuracy and precision in lead placement.
When the target has been located and refined by microelectrode recording, the permanent electrode is inserted. Fluoroscopy is usually used to verify the direction and stability of placement during the procedure.
An intraoperative test of the effects of deep brain stimulation is routinely performed to verify that some benefits can be achieved with the brain lead in its location, to determine the threshold for side effects, or both. For example, the patient may be asked to hold a cup as if trying to drink from it and to write or to draw a spiral on a clipboard to assess for improvements in tremor. Rigidity and bradykinesia can also be tested for improvements.
This intraoperative test is not aimed at assessing the best possible outcome of deep brain stimulation, and not even to see an improvement in all symptoms that burden the patient. Rather, it is to evaluate the likelihood that programming will be feasible with the implanted lead.
Subsequently, implantation of the pulse generator in the chest and connection to the brain lead is completed, usually with the patient under general anesthesia.
Implantation under general anesthesia, with intraoperative MRI
A new alternative to “awake stereotactic surgery” is implantation with the patient under general anesthesia, with intraoperative MRI. We have started to do this procedure in a new operating suite that is attached to an MRI suite. The magnet can be taken in and out of the operating room, allowing the surgeon to verify the location of the implanted leads right at the time of the procedure. In this fashion, intraoperative images are used to guide implantation instead of awake microelectrode recording. This is a new option for patients who cannot tolerate awake surgery and for those who have a contraindication to the regular stereotactic procedure with the patient awake.
Risks of bleeding and infection
The potential complications of implanting a device and leads in the brain can be significant.
Hemorrhage can occur, resulting in a superficial or deep hematoma.
Infection and erosion may require removal of the hardware for antibiotic treatment and possible reimplantation.
Other risks include those related to tunneling the wires from the head to the chest, to implanting the device in the chest, and to serious medical complications after surgery. Hardware failure can occur and requires additional surgery. Finally, environmental hazards and risks related to medical procedures such as MRI, electrocautery, and cardioversion should also be considered.
One advantage of deep brain stimulation is its reversibility: if postoperative programming reveals that the brain leads are not ideally placed, they can be surgically repositioned.
Deep brain stimulation is an important therapy for Parkinson disease and other movement disorders. It involves implantation of a pulse generator that can be adjusted by telemetry and can be activated and deactivated by clinicians and patients. It is therefore also a good investigational tool, allowing for double-blind, sham-controlled clinical trials by testing the effects of the stimulation with optimal settings compared with no stimulation.
This article will discuss the approved indications for deep brain stimulation (particularly for managing movement disorders), the benefits that can be expected, the risks, the complications, the maintenance required, how candidates for this treatment are evaluated, and the surgical procedure for implantation of the devices.
DEVICE SIMILAR TO HEART PACEMAKERS
The deep brain stimulation system must be programmed by a physician or midlevel practitioner, who observes a symptom and adjusts the pulse generator’s settings until the symptom improves. This can be a very time-consuming process.
In contrast to heart pacemakers, which run at low frequencies, the brain devices for movement disorders are almost always set to a high frequency, greater than 100 Hz. For this reason, they consume more energy and need larger batteries than those in modern heart pacemakers.
The batteries in these generators typically last 3 to 5 years and are replaced in an outpatient procedure. Newer, smaller, rechargeable devices are expected to last longer but require more maintenance and care by patients, who have to recharge them at home periodically.
INDICATIONS FOR DEEP BRAIN STIMULATION
Deep brain stimulation is approved by the US Food and Drug Administration (FDA) for specific indications:
- Parkinson disease
- Essential tremor
- Primary dystonia (under a humanitarian device exemption)
- Intractable obsessive-compulsive disorder (also under a humanitarian device exemption). We will not discuss this indication further in this paper.
For each of these conditions, deep brain stimulation is considered when nonsurgical management has failed, as is the case for most functional neurosurgical treatments.
Investigations under way in other disorders
Several studies of deep brain stimulation are currently in progress under FDA-approved investigational device exemptions. Some, with funding from industry, are exploring its use in neuropsychiatric conditions other than parkinsonism. Two large clinical trials are evaluating its use for treatment-refractory depression, a common problem and a leading cause of disability in the industrialized world. Multiple investigators are also exploring novel uses of this technology in disorders ranging from obsessive-compulsive disorder to epilepsy.
Investigation is also under way at Cleveland Clinic in a federally funded, prospective, randomized clinical trial of deep brain stimulation for patients with thalamic pain syndrome. The primary hypothesis is that stimulation of the ventral striatal and ventral capsular area will modulate the affective component of this otherwise intractable pain syndrome, reducing pain-related disability and improving quality of life.
DEEP BRAIN STIMULATION VS ABLATION
Before deep brain stimulation became available, the only surgical options for patients with advanced Parkinson disease, tremor, or dystonia were ablative procedures such as pallidotomy (ablation of part of the globus pallidus) and thalamotomy (ablation of part of the thalamus). These procedures had been well known for several decades but fell out of favor when levodopa became available in the 1960s and revolutionized the medical treatment of Parkinson disease.
Surgery for movement disorders, in particular Parkinson disease, had a rebirth in the late 1980s when the limitations and complications associated with the pharmacologic management of Parkinson disease became increasingly evident. Ablative procedures are still used to treat advanced Parkinson disease, but much less commonly in industrialized countries.
Although pallidotomy and thalamotomy can have excellent results, they are not as safe as deep brain stimulation, which has the advantage of being reversible, modulating the function of an area rather than destroying it. Any unwanted effect can be immediately altered or reversed, unlike ablative procedures, in which any change is permanent. In addition, deep brain stimulation is adjustable, and the settings can be optimized as the disease progresses over the years.
Ablative procedures can be risky when performed bilaterally, while deep brain stimulation is routinely done on both hemispheres for patients with bilateral symptoms.
Although deep brain stimulation is today’s surgical treatment of choice, it is not perfect. It has the disadvantage of requiring lifelong maintenance of the hardware, for which the patient remains dependent on a medical center. Patients are usually seen more often at the specialized center in the first few months after surgery for optimization of programming and titration of drugs. (During this time, most patients see a gradual, substantial reduction in medication intake.) They are then followed by their physician and visit the center less often for monitoring of disease status and for further adjustments to the stimulator.
Most patients, to date, receive nonrechargeable pulse generators. As mentioned above, the batteries in these devices typically last 3 to 5 years. Preferably, batteries are replaced before they are completely depleted, to avoid interruption of therapy. Periodic visits to the center allow clinicians to estimate battery expiration ahead of time and plan replacements accordingly.
Rechargeable pulse generators have been recently introduced and are expected to last up to 9 years. They are an option for patients who can comply with the requirements for periodic home recharging of the hardware.
Patients are given a remote control so that they can turn the device on or off and check its status. Most patients keep it turned on all the time, although some turn it off at night to save battery life.
WHAT CAN PARKINSON PATIENTS EXPECT FROM THIS THERAPY?
Typically, some parkinsonian symptoms predominate over others, although some patients with advanced disease present with a severe combination of multiple disabling symptoms. Deep brain stimulation is best suited to address some of the cardinal motor symptoms, particularly tremor, rigidity, and bradykinesia, and motor fluctuations such as “wearing off” and dyskinesia.
Improvement in some motor symptoms
As a general rule, appendicular symptoms such as limb tremor and rigidity are more responsive to this therapy than axial symptoms such as gait and balance problems, but some patients experience improvement in gait as well. Other symptoms, such as swallowing or urinary symptoms, are seldom helped.
Although deep brain stimulation can help manage key motor symptoms and improve quality of life, it does not cure Parkinson disease. Also, there is no evidence to date that it slows disease progression, although this is a topic of ongoing investigation.
Fewer motor fluctuations
A common complaint of patients with advanced Parkinson disease is frequent—and often unpredictable—fluctuation between the “on” state (ie, when the effects of the patient’s levodopa therapy are apparent) and the “off” state (ie, when the levodopa doesn’t seem to be working). Sometimes, in the on state, patients experience involuntary choreic or ballistic movements, called dyskinesias. They also complain that the on time becomes progressively shorter, so that the day is spent alternating between shorter on states (during which the patient may be dyskinetic) and longer off states, limiting the patient’s independence and quality of life.
Deep brain stimulation can help patients prolong the on time while reducing the amplitude of these fluctuations so that the symptoms are not as severe in the off time and dyskinesias are reduced in the on time.
Some patients undergo deep brain stimulation primarily for managing the adverse effects of levodopa rather than for controlling the symptoms of the disease itself. While these patients need levodopa to address the disabling symptoms of the disease, they are also more prone to developing levodopa-induced dyskinesias, fluctuating quickly from a lack of movement (the off state) to uncontrollable movements (the on state).
Deep brain stimulation typically allows the dosage of levodopa to be significantly reduced and gives patients more on time with fewer side effects and less fluctuation between the on and off states.
Response to levodopa predicts deep brain stimulation’s effects
Whether a patient is likely to be helped by deep brain stimulation can be tested with reasonable predictability by giving a single therapeutic dose of levodopa after the patient has been free of the drug for 12 hours. If there is an obvious difference on objective quantitative testing between the off and on states with a single dose, the patient is likely to benefit from deep brain stimulation. Those who do not respond well or are known to have never been well controlled by levodopa are likely poor candidates.
The test is also used as an indicator of whether the patient’s gait can be improved. Patients whose gait is substantially improved by levodopa, even for only a brief period of time, have a better chance of experiencing improvement in this domain with deep brain stimulation than those who do not show any gait improvement.
An important and notable exception to this rule is tremor control. Even Parkinson patients who do not experience significant improvement in tremor with levodopa (ie, who have medication-resistant tremors) are still likely to benefit from deep brain stimulation. Overall, tremor is the symptom that is most consistently improved with deep brain stimulation.
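The off/on comparison is typically quantified with the motor part of the Unified Parkinson’s Disease Rating Scale (UPDRS). The sketch below shows the basic arithmetic with hypothetical scores; the function name and the 30% cutoff (a rule of thumb sometimes used in the literature, not stated in this article) are illustrative only:

```python
def levodopa_challenge_improvement(updrs_off: float, updrs_on: float) -> float:
    """Percent improvement in the UPDRS motor score after a single
    levodopa dose (lower scores mean better motor function)."""
    if updrs_off <= 0:
        raise ValueError("off-state score must be positive")
    return 100.0 * (updrs_off - updrs_on) / updrs_off

# Hypothetical scores: 48 off medication, 20 in the on state
improvement = levodopa_challenge_improvement(48, 20)
print(f"{improvement:.1f}% improvement")  # prints "58.3% improvement"
print("likely responder" if improvement >= 30 else "poor candidate")
```

As the text notes, a patient with medication-resistant tremor would fail this arithmetic test yet might still benefit from stimulation, so the score is one input to candidacy, not the whole decision.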
Results of clinical trials
Several clinical trials have demonstrated that deep brain stimulation plus medication works better than medications alone for advanced Parkinson disease.
Deuschl et al1 conducted a randomized trial in 156 patients with advanced Parkinson disease. Patients receiving subthalamic deep brain stimulation plus medication had significantly greater improvement in motor symptoms as measured by the Unified Parkinson’s Disease Rating Scale as well as in quality-of-life measures than patients receiving medications only.
Krack et al2 reported on the outcomes of 49 patients with advanced Parkinson disease who underwent deep brain stimulation and then were prospectively followed. At 5 years, motor function had improved by approximately 55% from baseline, activities-of-daily-living scores had improved by 49%, and patients continued to need significantly less levodopa and to experience less drug-induced dyskinesia.
Complications related to deep brain stimulation occurred in both studies, including two large intracerebral hemorrhages, one of which was fatal.
Weight gain. During the first 3 months after the device was implanted, patients tended to gain weight (mean 3 kg, maximum 5 kg). Although weight gain is considered an adverse effect, many patients are quite thin by the time they are candidates for deep brain stimulation, and in such cases gaining lean weight can be a benefit.
Patients with poorly controlled Parkinson disease lose weight for several reasons: increased calorie expenditure from shaking and excessive movements; diet modification and protein restriction for some patients who realize that protein competes with levodopa absorption; lack of appetite due to depression or from poor taste sensation (due to anosmia); and decreased overall food consumption due to difficulty swallowing.
DEEP BRAIN STIMULATION FOR ESSENTIAL TREMOR
Essential tremor is more common than Parkinson disease, with a prevalence in the United States estimated at approximately 4,000 per 100,000 people older than 65 years.
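For a sense of scale, the cited rate can be converted to a percentage and applied to a population of interest; the clinic size below is purely illustrative, not a figure from this article:

```python
# Convert the cited prevalence of essential tremor (4,000 per 100,000
# people older than 65) into a percentage
prevalence_per_100k = 4_000
prevalence_pct = 100 * prevalence_per_100k / 100_000
print(f"{prevalence_pct:.0f}%")  # prints "4%"

# Expected cases in a hypothetical practice of 2,500 patients over 65
expected_cases = 2_500 * prevalence_per_100k / 100_000
print(f"about {expected_cases:.0f} expected cases")  # prints "about 100 expected cases"
```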
The tremor is often bilateral and is characteristically an action tremor, but in many patients it also has a postural, and sometimes a resting, component. It is distinct from parkinsonian tremor, which is usually predominantly a resting tremor. The differential diagnosis includes tremors secondary to central nervous system degenerative disorders as well as psychogenic tremors.
Drinking alcohol tends to relieve essential tremor, a finding that can often be elicited in the patient’s history. Patients whose symptoms improve with an alcoholic beverage are more likely to have essential tremor than another diagnosis.
Response to deep brain stimulation
Most patients with essential tremor respond well to deep brain stimulation of the contralateral ventral intermedius thalamic nucleus.
Treatment is usually started unilaterally, usually aimed at alleviating tremor in the patient’s dominant upper extremity. In selected cases, preference is given to treating the nondominant extremity when it is more severely affected than the dominant extremity.
Implantation of a device on the second side is offered to some patients who continue to be limited in activity and quality of life due to tremor of the untreated extremity. Surgery of the second side can be more complicated than the initial unilateral procedure. In particular, some patients may present with dysarthria, although that seems to be less common in our experience than initially estimated.
In practice, patients with moderate tremors tend to have an excellent response to deep brain stimulation. For this particular indication, if the response is not satisfactory, the treating team tends to consider surgically revising the placement of the lead rather than considering the patient a nonresponder. Patients with very severe tremors may have some residual tremor despite substantial improvement in severity. In our experience, patients with a greater proximal component of tremor tend to have less satisfactory results.
For challenging cases, implantation of additional electrodes in the thalamus or in new targets currently under investigation is sometimes considered, although this is an off-label use.
Treatment of secondary tremors, such as poststroke tremor or tremor due to multiple sclerosis, is sometimes attempted with deep brain stimulation. This is also an off-label option but is considered in selected cases for quality-of-life management.
Patients with axial tremors such as head or voice tremor are less likely to be helped by deep brain stimulation.
DEEP BRAIN STIMULATION FOR PRIMARY DYSTONIA
Generalized dystonia is a less common but severely impairing movement disorder.
Deep brain stimulation is approved for primary dystonia under a humanitarian device exemption, a regulatory mechanism for less common conditions. Deep brain stimulation is an option for patients who have significant impairment related to dystonia and who have not responded to conservative management such as anticholinergic agents, muscle relaxants, benzodiazepines, levodopa, or combinations of these drugs. Surgery has been shown to be effective for patients with primary generalized dystonia, whether or not they tested positive for a dystonia-related gene such as DYT1.
Kupsch et al3 evaluated 40 patients with primary dystonia in a randomized controlled trial of pallidal (globus pallidus pars interna) active deep brain stimulation vs sham stimulation (in which the device was implanted but not activated) for 3 months. Treated patients improved significantly more than controls (39% vs 5%) on the Burke-Fahn-Marsden Dystonia Rating Scale (BFMDRS).4 Similar improvement was noted when those receiving sham stimulation were switched to active stimulation.
During long-term follow-up, the results were generally sustained, with substantial improvement from deep brain stimulation in all movement symptoms evaluated except for speech and swallowing. Unlike improvement in tremor, which is quickly evident during testing in the operating room, the improvement in dystonia occurs gradually, and it may take months for patients to notice a change. Similarly, if stimulation stops because of device malfunction or dead batteries, symptoms sometimes do not recur for weeks or months.
Deep brain stimulation is sometimes offered to patients with dystonia secondary to conditions such as cerebral palsy or trauma (an off-label use). Although benefits are less consistent, deep brain stimulation remains an option for these individuals, aimed at alleviating some of the disabling symptoms. In patients with cerebral palsy or other secondary dystonias, it is sometimes difficult to distinguish how much of the disability is related to spasticity vs dystonia. Deep brain stimulation aims to alleviate the dystonic component; the spasticity may be managed with other options such as intrathecal baclofen (Lioresal).
Patients with tardive dystonia, which is usually secondary to treatment with antipsychotic agents, have been reported to respond well to bilateral deep brain stimulation. Gruber et al5 reported on a series of nine patients with a mean follow-up of 41 months. Patients improved by a mean of approximately 74% on the BFMDRS after 3 to 6 months of deep brain stimulation compared with baseline. No patient had long-term adverse effects, and quality-of-life and disability scores also improved significantly.
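The percentage improvements quoted in these dystonia studies are mean relative changes in BFMDRS scores. A minimal sketch of that calculation, using made-up scores (not data from Gruber et al or Kupsch et al):

```python
def mean_pct_improvement(baseline: list[float], follow_up: list[float]) -> float:
    """Mean percent improvement on a severity scale such as the BFMDRS,
    where lower scores indicate less severe dystonia."""
    assert len(baseline) == len(follow_up) and baseline
    return sum(100 * (b - f) / b for b, f in zip(baseline, follow_up)) / len(baseline)

# Hypothetical BFMDRS motor scores for three patients
baseline = [40.0, 62.0, 35.0]
follow_up = [10.0, 18.0, 8.0]
print(f"{mean_pct_improvement(baseline, follow_up):.0f}%")  # prints "74%"
```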
CANDIDATES ARE EVALUATED BY A MULTIDISCIPLINARY TEAM
Cleveland Clinic conducts a comprehensive 2-day evaluation for patients being considered for deep brain stimulation surgery, including consultations with specialists in neurology, neurosurgery, neuropsychology, and psychiatry.
Patients with significant cognitive deficits—near or meeting the diagnostic criteria for dementia—are usually not recommended for surgery for Parkinson disease. Deep brain stimulation is not aimed at alleviating cognitive issues related to Parkinson disease or other concomitant dementia, and there is a risk that neurostimulation could further worsen cognitive function in an already compromised brain. In addition, patients with significant abnormalities on neuroimaging may have their diagnosis reconsidered and may not be deemed ideal candidates for surgery.
An important part of the process is a discussion with the patient and family about the risks and the potential short-term and long-term benefits. Informed consent requires a good understanding of this balance, and patients are counseled to have realistic expectations about what the procedure can offer. Deep brain stimulation can help some of the symptoms of Parkinson disease but will not cure it, and there is no evidence to date that it slows its progression. At 5 or 10 years after surgery, patients are expected to be worse overall than they were in the first year after surgery, because of disease progression. However, patients who receive this treatment are expected, in general, to be doing better 5 or 10 years later (or longer) than those who do not receive it.
In addition to the discussion about risks, benefits, and expectations, a careful discussion is also devoted to hardware maintenance, including battery replacement. Younger patients in particular should be informed about the risk of breakage of the leads and the extension wire, as they are likely to outlive their implant. Patients and caregivers should be able to return to the specialized center should hardware malfunction occur.
Patients are also informed that after the system is implanted they cannot undergo magnetic resonance imaging (MRI) except of the head, performed with a specific head coil and under specific parameters. MRI of any other body part and with a body coil is contraindicated.
HOW THE DEVICE IS IMPLANTED
There are several options for implanting a deep brain stimulation device.
Implantation with the patient awake, using a stereotactic headframe
At Cleveland Clinic, we usually prefer implantation with a stereotactic headframe. The base or “halo” of the frame is applied to the head under local anesthesia, followed by imaging via computed tomography (Figure 1). Typically, the tomographic image is fused to a previously acquired MRI image, but the MRI is sometimes either initially performed or repeated on the day of surgery.
Patients are sedated for the beginning of the procedure, while the surgical team opens the skin and drills an opening in the skull for placement of the lead. The patient is then awakened for placement of the electrodes, which is not painful.
Microelectrode recording is typically performed in order to refine the targeting based on the stereotactic coordinates derived from neuroimaging. Although cadaver atlases exist and provide a guide to the stereotactic localization of subcortical structures, they are not completely accurate in representing the brain anatomy of all patients.
By “listening” to cells and recognizing their characteristic signals in specific areas, the surgical team can establish landmarks, forming an individualized map of the patient’s brain target. Microelectrode recording is invasive and has risks, including the risk of brain hemorrhage. Nevertheless, it is routinely done in most specialized deep brain stimulation centers because it can provide better accuracy and precision in lead placement.
When the target has been located and refined by microelectrode recording, the permanent electrode is inserted. Fluoroscopy is usually used to verify the direction and stability of placement during the procedure.
An intraoperative test of the effects of deep brain stimulation is routinely performed to verify that some benefits can be achieved with the brain lead in its location, to determine the threshold for side effects, or both. For example, the patient may be asked to hold a cup as if trying to drink from it and to write or to draw a spiral on a clipboard to assess for improvements in tremor. Rigidity and bradykinesia can also be tested for improvements.
This intraoperative test is not aimed at assessing the best possible outcome of deep brain stimulation, nor at demonstrating improvement in every symptom that burdens the patient. Rather, it is to evaluate the likelihood that programming will be feasible with the implanted lead.
Subsequently, implantation of the pulse generator in the chest and connection to the brain lead is completed, usually with the patient under general anesthesia.
Implantation under general anesthesia, with intraoperative MRI
A new alternative to “awake stereotactic surgery” is implantation with the patient under general anesthesia, with intraoperative MRI. We have started to do this procedure in a new operating suite that is attached to an MRI suite. The magnet can be taken in and out of the operating room, allowing the surgeon to verify the location of the implanted leads right at the time of the procedure. In this fashion, intraoperative images are used to guide implantation instead of awake microelectrode recording. This is a new option for patients who cannot tolerate awake surgery and for those who have a contraindication to the regular stereotactic procedure with the patient awake.
Risks of bleeding and infection
The potential complications of implanting a device and leads in the brain can be significant.
Hemorrhage can occur, resulting in a superficial or deep hematoma.
Infection and erosion may require removal of the hardware for antibiotic treatment and possible reimplantation.
Other risks include those related to tunneling the wires from the head to the chest, to implanting the device in the chest, and to serious medical complications after surgery. Hardware failure can occur and requires additional surgery. Finally, environmental risks and risks related to medical devices such as MRI, electrocautery, and cardioversion should also be considered.
An advantage of deep brain stimulation is its reversibility: if during postoperative programming the brain leads prove not to be ideally placed, they can be surgically repositioned.
KEY POINTS
- Compared with ablative procedures, deep brain stimulation has the advantage of being reversible and adjustable. It is considered safer than ablative surgery, in particular for bilateral procedures, which are often needed for patients with advanced Parkinson disease and other movement disorders.
- For Parkinson disease, deep brain stimulation improves the cardinal motor symptoms, extends medication “on” time, and reduces motor fluctuations during the day.
- In general, patients with Parkinson disease are likely to benefit from this therapy if they show a clear response to levodopa. Patients are therefore asked to stop their Parkinson medications overnight to permit a formal evaluation of their motor response before and after a dose of levodopa.
- Candidates require a thorough evaluation to assess whether they are likely to benefit from deep brain stimulation and if they can comply with the maintenance often required for a successful outcome.
Chest pain followed by sudden collapse
Q: Given what we know so far, what is the most likely cause of the ST segment elevation in leads V1 and V2?
- Brugada syndrome
- Pulmonary embolism
- Right ventricular injury
- Anterior myocardial infarction
A: The correct answer is right ventricular injury (discussed below).
Brugada syndrome is a genetic disorder caused by a mutation in the cardiac sodium channel gene. It is characterized by a pronounced elevation of the J point, a coved-type ST segment elevation in leads V1 and V2, and a propensity to develop malignant ventricular arrhythmias and sudden cardiac death.
In this patient, the pattern of ST segment elevation in leads V1 and V2 may be falsely interpreted as the classic type 1 Brugada electrocardiographic pattern. However, the classic type 1 Brugada electrocardiogram is characterized by a coved ST elevation followed by a negative T wave.1 The absence of T-wave inversion following ST segment elevation in this patient excluded Brugada syndrome. Moreover, the main presentation in patients with Brugada syndrome is either syncope or sudden cardiac death.
Pulmonary embolism can present with various electrocardiographic patterns. ST segment elevation in the anteroseptal leads is an extremely rare sign and has been described in only a few reports.2,3 Pulmonary embolism can also present with abnormal Q waves in leads III and aVF but not in lead II.4 The initial electrocardiographic rhythm in patients who present with cardiac arrest is usually pulseless electrical activity; however, the combination of increased right ventricular oxygen consumption due to increased right ventricular afterload and right ventricular hypoperfusion due to hypotension can lead to right ventricular ischemia and subsequent arrhythmias. Mittal and Arora5 described a case of submassive pulmonary embolism with right ventricular infarction presenting with sustained ventricular tachycardia.
The prognosis is usually poor in patients with cardiac arrest due to pulmonary embolism, which is usually caused by a massive embolus and usually necessitates thrombolytic therapy.
In the patient described here, pulmonary embolism was part of the differential diagnosis, given the presence of ST segment elevation in leads V1 and V2 in the context of the clinical scenario. However, the restoration of spontaneous circulation without any specific treatment for pulmonary embolism and the normal oxygenation after cardiac arrest excluded pulmonary embolism.
Right ventricular myocardial injury is important to recognize for therapeutic and prognostic reasons. It is usually associated with inferior infarction because it is typically secondary to an acute occlusion of the right coronary artery proximal to the take-off of the right ventricular marginal branch. In the described scenario, the presence of ST segment elevation and Q waves in the inferior leads together with reciprocal ST segment depression in leads I and aVL represents an inferior myocardial infarct. ST segment elevation in the right precordial leads V3R and V4R is a marker for right ventricular injury—especially in V4R, in which it is a powerful predictor of right ventricular involvement. ST segment elevation in leads V1 and V2 is not usually demonstrated in patients with right ventricular injury because the electrical current of injury from the inferior left ventricular myocardial infarction dominates the right ventricular electrical forces, masking the ST segment elevation in these leads.6 Data from the Hirulog and Early Reperfusion or Occlusion-2 trial showed that ST segment elevation of 1 mm or greater in lead V1 is associated with an increased risk of death in patients with acute inferior myocardial infarction.7 Furthermore, the presence of ST-segment elevation in lead V6 in patients with acute Q-wave inferior myocardial infarction, as evident in the first electrocardiogram, is associated with larger infarct size and a greater incidence of major arrhythmias.8
DETERMINING THE CULPRIT VESSEL
In the scenario described here, differentiating between right ventricular injury and anterior myocardial infarction is important to determine the culprit vessel.
CASE CONCLUDED
- Antzelevitch C, Brugada P, Borggrefe M, et al. Brugada syndrome: report of the second consensus conference: endorsed by the Heart Rhythm Society and the European Heart Rhythm Association. Circulation 2005; 111:659–670.
- Livaditis IG, Paraschos M, Dimopoulos K. Massive pulmonary embolism with ST elevation in leads V1–V3 and successful thrombolysis with tenecteplase. Heart 2004; 90:e41.
- Falterman TJ, Martinez JA, Daberkow D, Weiss LD. Pulmonary embolism with ST segment elevation in leads V1 to V4: case report and review of the literature regarding electrocardiographic changes in acute pulmonary embolism. J Emerg Med 2001; 21:255–261.
- Sreeram N, Cheriex EC, Smeets JL, Gorgels AP, Wellens HJ. Value of the 12-lead electrocardiogram at hospital admission in the diagnosis of pulmonary embolism. Am J Cardiol 1994; 73:298–303.
- Mittal SR, Arora H. Pulmonary embolism with isolated right ventricular infarction. Indian Heart J 2001; 53:218–220.
- Geft IL, Shah PK, Rodriguez L, et al. ST elevations in leads V1 to V5 may be caused by right coronary artery occlusion and acute right ventricular infarction. Am J Cardiol 1984; 53:991–996.
- Wong CK, Gao W, Stewart RA, et al; Hirulog and Early Reperfusion or Occlusion-2 Investigators. Prognostic value of lead V1 ST elevation during acute inferior myocardial infarction. Circulation 2010; 122:463–469.
- Tsuka Y, Sugiura T, Hatada K, Abe Y, Takahashi N, Iwasaka T. Clinical characteristics of ST-segment elevation in lead V6 in patients with Q-wave acute inferior wall myocardial infarction. Coron Artery Dis 1999; 10:465–469.
A 48-year-old woman with an ecchymotic rash
She had no constitutional symptoms and no history of venous thromboembolism, stroke, pregnancy loss, recent anticoagulation, or endovascular procedures.
Q: What is the most likely diagnosis?
- Chronic meningococcemia
- Cholesterol embolism
- Antiphospholipid syndrome
- Cryoglobulinemic vasculitis
- Heterozygous protein C deficiency
A: The most likely diagnosis is skin necrosis due to intravascular thrombosis, consistent with antiphospholipid syndrome. By clinical and laboratory criteria, the patient has systemic lupus erythematosus. Pain and swelling in multiple joints are indicative of the polyarthritis associated with lupus. Retesting 12 weeks later again detected lupus anticoagulant, confirming the diagnosis of antiphospholipid syndrome.2
In the hospital, the patient was started on unfractionated heparin, later switched to warfarin. Her skin lesions gradually cleared, her pain diminished significantly, and no new lesions appeared after the start of anticoagulation therapy. For her lupus, she was started on hydroxychloroquine (Plaquenil), which has been suggested to also have an adjuvant antithrombotic role in antiphospholipid syndrome.2 On a follow-up visit 3 months later, she was doing well.
MORE ABOUT ANTIPHOSPHOLIPID SYNDROME
Antiphospholipid syndrome is termed primary when no underlying disease is identified, and secondary when it occurs in conjunction with an autoimmune rheumatologic disease, an infection, malignancy, or certain drugs.3 It is the most common cause of acquired thrombophilia.4 Arterial or venous thromboses and recurrent miscarriages are salient clinical features.
Laboratory abnormalities include the presence of a lupus anticoagulant and anticardiolipin and beta-2-glycoprotein 1 antibodies.
Skin manifestations include livedo reticularis, purpuric macular lesions, atrophie blanche, cutaneous infarcts, ulceration, and painful nodules.5 Livedo reticularis, a violaceous, lace-like cutaneous discoloration, is the most commonly described skin lesion, present in 20% to 50% of cases.5,6 Cutaneous necrosis may involve the legs, face, and ears, or it may be generalized.6
The prothrombotic state is believed to be immune-mediated, with complement activation.2 Endothelial cells and monocytes are activated by antiphospholipid antibodies with activity against beta-2-glycoprotein 1, resulting in up-regulation of tissue factor and in platelet activation.2 Histopathologic examination reveals noninflammatory vascular thromboses with endothelial damage.5
Although antiphospholipid syndrome seems to be immune-mediated, immunosuppressive therapy has not proved very effective,3 and anticoagulation is the recommended treatment.3,7
THE OTHER DIAGNOSTIC POSSIBILITIES
Chronic meningococcemia, sometimes associated with terminal complement deficiency, produces a petechial rash in 50% to 80% of cases. The rash can become confluent, resulting in hemorrhagic patches with central necrosis, resembling the lesions in our patient.
However, these skin lesions are due to thrombi in the dermal vessels, associated with leukocytoclastic vasculitis. These dermatopathologic changes were not seen in our patient. Moreover, meningococci were not identified in blood cultures or in the luminal thrombi and vessel walls.
Cholesterol embolism occurs when cholesterol crystals break off from severely atherosclerotic plaques, either spontaneously or after local trauma induced by angiography or aortic injury. The crystals shower downstream through the arterial system, often immediately occluding arterioles 100 to 200 μm in diameter.
Our patient had no such history, and the skin biopsy did not show the characteristic “cholesterol clefts”—biconvex, needle-shaped clefts left by the dissolved crystals of cholesterol within the occluded vessels.
Cryoglobulinemic vasculitis is an immune-complex-mediated condition involving small- to medium-size vessels, often associated with hepatitis C virus infection. Skin lesions appear in dependent areas and include erythematous macules and purpuric papules.
Cryoglobulins were not detected in our patient’s sera, nor did the skin biopsy indicate the typical leukocytoclastic vasculitis seen in this condition.
Heterozygous protein C deficiency causes venous thromboembolism and warfarin-induced skin necrosis. Spontaneous thrombosis of cutaneous arterioles (as in our patient) is not a usual manifestation. Also, our patient had normal protein C levels and no history of warfarin use before the skin lesions developed.
Acknowledgment: The authors are grateful to Judith Drazba, PhD, of Research Core Services (Imaging) at Cleveland Clinic for help in the preparation of the photomicrographs.
- Brandt JT, Triplett DA, Alving B, Scharrer I. Criteria for the diagnosis of lupus anticoagulants: an update. On behalf of the Subcommittee on Lupus Anticoagulant/Antiphospholipid Antibody of the Scientific and Standardisation Committee of the ISTH. Thromb Haemost 1995; 74:1185–1190.
- Ruiz-Irastorza G, Crowther M, Branch W, Khamashta MA. Antiphospholipid syndrome. Lancet 2010; 376:1498–1509.
- Myones BL, McCurdy D. The antiphospholipid syndrome: immunologic and clinical aspects. Clinical spectrum and treatment. J Rheumatol Suppl 2000; 58:20–28.
- Bick RL, Baker WF. Antiphospholipid syndrome and thrombosis. Semin Thromb Hemost 1999; 25:333–350.
- Gibson GE, Su WP, Pittelkow MR. Antiphospholipid syndrome and the skin. J Am Acad Dermatol 1997; 36:970–982.
- Nahass GT. Antiphospholipid antibodies and the antiphospholipid antibody syndrome. J Am Acad Dermatol 1997; 36:149–168.
- Petri M. Pathogenesis and treatment of the antiphospholipid antibody syndrome. Med Clin North Am 1997; 81:151–177.
She had no constitutional symptoms and no history of venous thromboembolism, stroke, pregnancy loss, recent anticoagulation, or endovascular procedures.
Q: What is the most likely diagnosis?
- Chronic meningococcemia
- Cholesterol embolism
- Antiphospholipid syndrome
- Cryoglobulinemic vasculitis
- Heterozygous protein C deficiency
A: The most likely diagnosis is skin necrosis due to intravascular thrombosis, consistent with antiphospholipid syndrome. By clinical and laboratory criteria, the patient has systemic lupus erythematosus. Pain and swelling in multiple joints is indicative of polyarthritis associated with lupus. Retesting 12 weeks later again detected lupus anticoagulant, confirming the diagnosis of antiphospholipid syndrome.2
In the hospital, the patient was started on unfractionated heparin, later switched to warfarin. Her skin lesions gradually cleared, her pain diminished significantly, and no new lesions appeared after the start of anticoagulation therapy. For her lupus, she was started on hydroxychloroquine (Plaquenil), which has been suggested to also have an adjuvant antithrombotic role in antiphospholipid syndrome.2 On a follow-up visit 3 months later, she was doing well.
MORE ABOUT ANTIPHOSPHOLIPID SYNDROME
She had no constitutional symptoms and no history of venous thromboembolism, stroke, pregnancy loss, recent anticoagulation, or endovascular procedures.
Q: What is the most likely diagnosis?
- Chronic meningococcemia
- Cholesterol embolism
- Antiphospholipid syndrome
- Cryoglobulinemic vasculitis
- Heterozygous protein C deficiency
A: The most likely diagnosis is skin necrosis due to intravascular thrombosis, consistent with antiphospholipid syndrome. By clinical and laboratory criteria, the patient has systemic lupus erythematosus. Pain and swelling in multiple joints are indicative of polyarthritis associated with lupus. Retesting 12 weeks later again detected lupus anticoagulant, confirming the diagnosis of antiphospholipid syndrome.2
In the hospital, the patient was started on unfractionated heparin, later switched to warfarin. Her skin lesions gradually cleared, her pain diminished significantly, and no new lesions appeared after the start of anticoagulation therapy. For her lupus, she was started on hydroxychloroquine (Plaquenil), which has been suggested to also have an adjuvant antithrombotic role in antiphospholipid syndrome.2 On a follow-up visit 3 months later, she was doing well.
MORE ABOUT ANTIPHOSPHOLIPID SYNDROME
Antiphospholipid syndrome is termed primary when no underlying disease is identified, and secondary when it occurs in conjunction with an autoimmune rheumatologic disease, an infection, malignancy, or certain drugs.3 It is the most common cause of acquired thrombophilia.4 Arterial or venous thromboses and recurrent miscarriages are salient clinical features.
Laboratory abnormalities include the presence of a lupus anticoagulant and anticardiolipin and beta-2-glycoprotein 1 antibodies.
Skin manifestations include livedo reticularis, purpuric macular lesions, atrophie blanche, cutaneous infarcts, ulceration, and painful nodules.5 Livedo reticularis, a violaceous, lace-like cutaneous discoloration, is the most commonly described skin lesion, present in 20% to 50% of cases.5,6 Cutaneous necrosis may involve the legs, face, and ears, or it may be generalized.6
The prothrombotic state is believed to be immune-mediated, with complement activation.2 Endothelial cells and monocytes are activated by antiphospholipid antibodies with activity against beta-2-glycoprotein 1, resulting in up-regulation of tissue factor and in platelet activation.2 Histopathologic examination reveals noninflammatory vascular thromboses with endothelial damage.5
Although antiphospholipid syndrome seems to be immune-mediated, immunosuppressive therapy has not proved very effective,3 and anticoagulation is the recommended treatment.3,7
THE OTHER DIAGNOSTIC POSSIBILITIES
Chronic meningococcemia, sometimes associated with terminal complement deficiency, is associated with a petechial rash in 50% to 80% of cases. The rash can become confluent, resulting in hemorrhagic patches with central necrosis, resembling the lesions in our patient.
However, these skin lesions are due to thrombi in the dermal vessels, associated with leukocytoclastic vasculitis. These dermatopathologic changes were not seen in our patient. Moreover, meningococci were not identified in blood cultures or in the luminal thrombi and vessel walls.
Cholesterol embolism occurs when cholesterol crystals break off from severely atherosclerotic plaques, either spontaneously or after local trauma induced by angiography or aortic injury. The crystals shower downstream through the arterial system, often immediately occluding arterioles 100 to 200 μm in diameter.
Our patient had no such history, and the skin biopsy did not show the characteristic “cholesterol clefts”—biconvex, needle-shaped clefts left by the dissolved crystals of cholesterol within the occluded vessels.
Cryoglobulinemic vasculitis is an immune-complex-mediated condition involving small- to medium-size vessels, often associated with hepatitis C virus infection. Skin lesions appear in dependent areas and include erythematous macules and purpuric papules.
Cryoglobulins were not detected in our patient’s sera, nor did the skin biopsy indicate the typical leukocytoclastic vasculitis seen in this condition.
Heterozygous protein C deficiency causes venous thromboembolism and warfarin-induced skin necrosis. Spontaneous thrombosis of cutaneous arterioles (as in our patient) is not a usual manifestation. Also, our patient had normal protein C levels and no history of warfarin use before the skin lesions developed.
Acknowledgment: The authors are grateful to Dr. Judith Drazba, PhD, of Research Core Services (Imaging) at Cleveland Clinic for help in the preparation of the photomicrographs.
- Brandt JT, Triplett DA, Alving B, Scharrer I. Criteria for the diagnosis of lupus anticoagulants: an update. On behalf of the Subcommittee on Lupus Anticoagulant/Antiphospholipid Antibody of the Scientific and Standardisation Committee of the ISTH. Thromb Haemost 1995; 74:1185–1190.
- Ruiz-Irastorza G, Crowther M, Branch W, Khamashta MA. Antiphospholipid syndrome. Lancet 2010; 376:1498–1509.
- Myones BL, McCurdy D. The antiphospholipid syndrome: immunologic and clinical aspects. Clinical spectrum and treatment. J Rheumatol Suppl 2000; 58:20–28.
- Bick RL, Baker WF. Antiphospholipid syndrome and thrombosis. Semin Thromb Hemost 1999; 25:333–350.
- Gibson GE, Su WP, Pittelkow MR. Antiphospholipid syndrome and the skin. J Am Acad Dermatol 1997; 36:970–982.
- Nahass GT. Antiphospholipid antibodies and the antiphospholipid antibody syndrome. J Am Acad Dermatol 1997; 36:149–168.
- Petri M. Pathogenesis and treatment of the antiphospholipid antibody syndrome. Med Clin North Am 1997; 81:151–177.
Posttraumatic stress disorder, depression, and suicide in veterans
In military veterans, depression, posttraumatic stress disorder (PTSD), and suicidal thoughts are common and closely linked. Veterans are less likely to seek care and more likely to act successfully on suicidal thoughts. Therefore, screening, timely diagnosis, and effective intervention are critical.1
In this article, we review the signs and symptoms of depression and PTSD, the relationship of these conditions to suicidality in veterans, and the role of the non-mental-health clinician in detecting suicidal ideation early and then taking appropriate action. Early identification of suicidality may help save the lives of those who otherwise may not seek care.
FROM IDEA TO PLAN TO ACTION
Suicide can be viewed as a process that begins with suicidal ideation, followed by planning and then by a suicidal act,2–9 and suicidal ideation can be prompted by depression or PTSD.
Suicidal ideation, defined as any thought of being the agent of one’s own death,2 is relatively common. Most people who attempt suicide report a history of suicidal ideation.10 In fact, current suicidal ideation increases suicide risk,11,12 and death from suicide is especially correlated with the worst previous suicidal ideation.3
Suicidal ideation is an important predictor of suicidal acts in all major psychiatric conditions.3,13–17 In a longitudinal study in a community sample, adolescents who had suicidal ideation at age 15 were more likely to have attempted suicide by age 30.5
The annual incidence of suicidal ideation in the United States is estimated to be 5.6%,18 while its estimated lifetime prevalence in Western countries ranges from 2.09% to 18.51%.19 A national survey found that 13.5% of Americans had suicidal ideation at some point during their lifetime.20 About 34% of people who think about suicide report going from seriously thinking about it to making a plan, and 72% of planners move from a plan to an attempt.20 In the European Study of the Epidemiology of Mental Disorders,21 the lifetime prevalence of suicidal ideation was 7.8%, and of suicide attempts 1.3%. Being female, younger, divorced, or widowed was associated with a higher prevalence of suicide ideation and attempts.
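The conditional percentages above compound: a rough back-of-envelope sketch (our arithmetic for illustration, not a statistic reported in the cited surveys) multiplies them to approximate the fraction of the population that moves from lifetime ideation through a plan to an attempt.

```python
# Back-of-envelope combination of the survey figures quoted in the text:
# 13.5% lifetime suicidal ideation, 34% of ideators form a plan,
# 72% of planners go on to an attempt. The combined value is only an
# illustration of how the conditional percentages compound.

lifetime_ideation = 0.135    # fraction with lifetime suicidal ideation
plan_given_ideation = 0.34   # fraction of ideators who make a plan
attempt_given_plan = 0.72    # fraction of planners who attempt

implied_attempt_via_plan = (lifetime_ideation
                            * plan_given_ideation
                            * attempt_given_plan)
print(f"{implied_attempt_via_plan:.1%}")  # prints "3.3%"
```

Because the surveys differ in samples, time frames, and definitions, this compounded figure should not be read as a measured attempt prevalence; it simply shows the scale implied if the three percentages applied to one population.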
Although terms such as “acute suicidal ideation,” “chronic suicidal ideation,” “active suicidal ideation,” and “passive suicidal ideation” are used in the clinical and research literature, the difference between them is not clear. Regardless of the term one uses, any suicidal ideation should be taken very seriously.
HABITUATION IN VETERANS
Interestingly, according to the Interpersonal-Psychological Theory of Suicide,22 the suicidal process is related to feelings that one does not belong with other people, feelings that one is a burden on others or society, and an acquired capability to overcome the fear of pain associated with suicide.22 Veterans are likely to have acquired this capability as the result of military training and combat exposure, which may cause habituation to fear of painful experiences, including suicide.
FEATURES AND CAUSES OF PTSD
PTSD—a severe, multifaceted disorder precipitated by exposure to a psychologically distressing experience—first appeared in the Diagnostic and Statistical Manual of Mental Disorders (DSM-III) in 1980,23,24 arising from studies of veterans of the Vietnam War and of civilian victims of natural and man-made disasters.44,45 However, the study of PTSD dates back more than 100 years. Before 1980, posttraumatic syndromes were recognized by various names, including railway spine, shell shock, traumatic (war) neurosis, concentration-camp syndrome, and rape-trauma syndrome.24,25 The symptoms described in these syndromes overlap considerably with what we now recognize as PTSD.
According to the most recent edition of the Diagnostic and Statistical Manual, DSM-IV-TR,27 the basic feature of PTSD is the development of characteristic symptoms following exposure to a stressor event. Examples include:
- Direct personal experience of an event that involves actual or threatened death or serious injury, or other threat to one’s physical integrity
- Witnessing an event that involves death, injury, or a threat to the physical integrity of another person
- Learning about unexpected or violent death, serious harm, or threat of death or injury experienced by a family member or other close associate.
People react to the event with fear and helplessness and try to avoid being reminded of it.
Traumatic events leading to PTSD include military combat, violent personal assault, being kidnapped or taken hostage, a terrorist attack, torture, incarceration, a natural or man-made disaster, an automobile accident, and the diagnosis of a life-threatening illness.
PTSD is a potentially fatal disorder, because it can lead to suicide. The psychobiology of PTSD and suicidal behavior may differ between war veterans and civilians.28
PTSD often coexists with other psychiatric illnesses29,30: the National Comorbidity Survey found that about 80% of patients with PTSD meet the criteria for at least one other psychiatric disorder.30 Symptoms of PTSD and depression overlap significantly. Common features include diminished interest or participation in significant activities; irritability; sleep disturbance; difficulty concentrating; restricted range of affect; and social detachment.
PTSD also often coexists with traumatic brain injury and other neurologic and medical conditions.31,32 The clinician is more often than not faced with a PTSD patient with multiple diagnoses—psychiatric and medical.
Unfortunately, studies show that PTSD often goes unrecognized by non-mental-health practitioners.31,33 In a national cohort of primary care patients in Israel, 9% met criteria for current PTSD, but only 2% of actual cases were recognized by their treating physician.33
SUICIDE RISK IN VETERANS
Suicidal behavior is a critical problem in war veterans. During the wars in Iraq and Afghanistan, the US Army's suicide rate increased from 12.4 per 100,000 in 2003 to 18.1 per 100,000 in 2008.34 In the United Kingdom, more veterans of the 1982 Falklands War have committed suicide since the war ended than were killed in action during it.35 The South Atlantic Medal Association, which represents and helps Falklands veterans, believes that 264 veterans had taken their own lives by 2002, exceeding the 255 who died in active service. The suicide rate in Falklands War veterans is about three times higher than the rate in those who left the UK armed forces from 1996 to 2005.36,37
Observations have suggested a relatively high prevalence of suicide ideation and attempts in different generations of war veterans and in different countries.38
Suicidal ideation is more dangerous in war veterans than in the general population because veterans are trained in the use of firearms and often own them; in other words, they often possess the lethal means to act on their suicidal thoughts.
And female veterans may be more likely to commit suicide with a firearm. A US study39 found that, after adjusting for age, marital status, race, and region of residence, female veterans who committed suicide were 1.6 times more likely than nonveterans to have used a firearm, and male veterans 1.3 times more likely.
DEPRESSION, PTSD, AND SUICIDE RISK
Suicidal ideation in war veterans is often associated with PTSD and depression, conditions that often coexist. And PTSD has been shown to be a risk factor for suicidal ideation in American veterans of the wars in Iraq and Afghanistan.40 In a survey of 407 veterans, those who screened positive for PTSD (n = 202) were more than four times as likely to endorse having suicidal ideation compared with veterans who screened negative for PTSD. In veterans who screened positive for PTSD, the risk of suicidal ideation was 5.7 times higher in those with two or more coexisting psychiatric disorders compared with veterans with PTSD alone.40
Additional risk factors
Factors contributing to the risk of suicidal ideation and behavior in patients with PTSD include comorbid disorders (especially depression and substance abuse), impulsive behavior, feelings of guilt or shame, re-experiencing symptoms, and prewar traumatic experiences.41–45
Recent studies have analyzed factors associated with suicidal ideation in US veterans of the wars in Iraq and Afghanistan. Pietrzak et al46 surveyed 272 veterans, of whom 34 (12.5%) reported contemplating suicide in the 2 weeks prior to completing the survey. Screening positive for PTSD and depression and having psychosocial difficulties were associated with suicidal ideation, while postdeployment social support and a sense of purpose and control were negatively associated with it.
Other authors47 found that only the “emotional numbing” cluster of PTSD symptoms and the “cognitive-affective” cluster of depression symptoms were distinctively associated with suicidal ideation. Maguen et al48 recently reported that 2.8% of newly discharged US soldiers endorsed suicidal ideation. Prior suicide attempts, prior psychiatric medication, and killing in combat were each significantly associated with suicidal ideation, with killing exerting a mediated effect through depression and PTSD symptoms.
Another recent study49 suggests that veterans reporting subthreshold PTSD (ie, having symptoms of PTSD but not meeting all the criteria for the diagnosis) were three times more likely to admit to having suicidal ideation than veterans without PTSD, which indicates that subthreshold PTSD may increase suicide risk.
Lemaire and Graham50 reported that prior exposure to physical or sexual abuse, a history of a prior suicide attempt, and a current diagnosis of a psychotic disorder, a depressive disorder, or PTSD were associated with current suicidal ideation. Other factors related to suicidal ideation were female sex, deployment concerns related to training (a protective factor, ie, one that reduces suicide risk by enhancing resilience and counterbalancing risk factors), the deployment environment, family concerns, postdeployment support (another protective factor), and postdeployment stressors.
PTSD and depression: An additive effect
These findings also suggest that the coexistence of PTSD and depression increases the risk of suicidal ideation more than either condition alone. This is consistent with the concept of posttraumatic mood disorder, ie, that when these diagnoses coexist they constitute a condition different from either alone, one that carries a greater risk of suicidal ideation and behavior.51,52
HOW TO ASSESS SUICIDE RISK
Physicians are in a key position to screen for depression and PTSD in all their patients, including those who are veterans.31,53
Traumatic events of adulthood can be asked about directly. For example, “Have you ever been physically attacked or assaulted? Have you ever been in an automobile accident? Have you ever been in a war or a disaster?” A positive response should alert the physician to inquire further about the relationship between the event and any current symptoms.
Questions about traumatic childhood experiences should be prefaced with reassuring statements of normality to put the patient at ease. For example, “Many people continue to think about frightening aspects of their childhood. Do you?”
Physicians working with war veterans suffering from PTSD or depression should regularly inquire about suicidal ideation, and if the patient admits to having suicidal ideation, the physician should ask about the possession of firearms or other lethal means.
This type of screening has limitations. Fear of being socially stigmatized or of appearing weak may prevent veterans from disclosing thoughts of suicide. And one study54 found little evidence to suggest that inquiring about suicide successfully identifies veterans most at risk of suicide.
Indirect indicators of suicidality
Identifying indirect indicators of suicidal thoughts is also important: these can include pill-seeking behavior; talking or writing about death, dying, or suicide; hopelessness; rage or uncontrolled anger; seeking revenge; reckless or risky behaviors or activities; feeling trapped; and saying or feeling there is no reason for living.55
Other warning signs include depressed mood, anhedonia, insomnia, severe anxiety, and panic attacks.56 A prior suicide attempt, a family history of suicidal behavior, and comorbidity of depression and alcoholism are associated with a high suicide risk.56–59
Suicidal behavior is more common after recent, severe, stressful life events and in physical illnesses such as HIV/AIDS, Huntington disease, malignant neoplasm, multiple sclerosis, peptic ulcer, renal disease, spinal cord injury, and systemic lupus erythematosus. This is true in both veterans and nonveterans.60
Useful questions
Useful questions in the assessment of suicidal risk can be formulated as follows61:
- How have you reacted to stress in the past, and how effective are your usual coping strategies?
- Have you contemplated or attempted suicide in the past? If so, how many times and under what circumstances? And how is your current situation compared with past situations when you considered or attempted suicide?
- Do you ever feel hopeless, helpless, powerless, or extremely angry?
- Do you ever have hallucinations or delusions?
The role of guilt
It is important to ask about guilt feelings. Hendin and Haas62 observed that in veterans with PTSD related to combat experience, combat-related guilt was the most significant predictor of suicide attempts and of preoccupation with suicide after discharge. Combat veterans may feel guilt about surviving when others have died, acts of omission and commission, and thoughts or feelings.63 Some have suggested that guilt may be a mechanism through which violence is related to PTSD and major depressive disorder in combat veterans.64
INTERVENTIONS
Patients with comorbid depression, PTSD, and suicidal ideation are usually very sick and should be referred to a psychiatrist. They are usually treated with antidepressants, such as paroxetine (Paxil) or sertraline (Zoloft), and psychotherapy.65 Patients who have suicidal intent or a plan should be referred to an emergency department for evaluation or hospitalization. All veterans should be given the toll-free phone number of the Veterans Crisis Line (1-800-273-8255), a US Department of Veterans Affairs (VA) resource that connects veterans in crisis, and their families and friends, with qualified VA professionals.
As with many illnesses, such as cancer, suicidal behavior is most treatable and yields the best outcome when diagnosed and treated early.66 And the earliest manifestation of suicidal behavior is suicidal ideation.
The association of suicidal ideation with PTSD and depression underlines the importance of the timely diagnosis and effective treatment of these conditions among war veterans. Veterans experiencing subthreshold PTSD or depression may be less likely to receive mental health treatment. This indicates that non-mental-health clinicians should be educated about how to detect PTSD and depression symptoms. They may also help to detect suicidality early, which may help save lives.
Promoting social, emotional, and spiritual wellness
Our patients remind us every day that the work we do matters, that we have much more to learn, and that the more we understand suicidal behavior in veterans, the more we can do to reduce their suffering. We need to promote their social, emotional, and spiritual wellness. Encouraging resilience, optimism, and mental health can protect them from depression and from suicidal ideation and behavior. Resilience can be promoted by teaching patients to:
- Build relationships with family members and friends who can provide support
- Think well about themselves and identify their areas of strength
- Invest time and energy in developing new skills
- Challenge negative thoughts; try to find optimistic ways of viewing any situation
- Look after their physical health and exercise regularly
- Get involved in community activities to help counter feelings of isolation
- Ask for assistance and support when they need it.67
Our knowledge about what works and what does not work in suicide prevention in veterans is evolving. Research addressing combat-related PTSD, depression, and suicidal behavior in war veterans is critically needed to better understand the nature of these conditions.
- Mann JJ. Searching for triggers of suicidal behavior. Am J Psychiatry 2004; 161:395–397.
- American Psychiatric Association. Practice Guideline For The Assessment and Treatment of Patients with Suicidal Behaviors. Arlington, VA: American Psychiatric Publishing, Inc.; 2003.
- Beck AT, Brown GK, Steer RA, Dahlsgaard KK, Grisham JR. Suicide ideation at its worst point: a predictor of eventual suicide in psychiatric outpatients. Suicide Life Threat Behav 1999; 29:1–9.
- Beck AT, Steer RA, Kovacs M, Garrison B. Hopelessness and eventual suicide: a 10-year prospective study of patients hospitalized with suicidal ideation. Am J Psychiatry 1985; 142:559–563.
- Reinherz HZ, Tanner JL, Berger SR, Beardslee WR, Fitzmaurice GM. Adolescent suicidal ideation as predictive of psychopathology, suicidal behavior, and compromised functioning at age 30. Am J Psychiatry 2006; 163:1226–1232.
- Vilhjalmsson R, Kristjansdottir G, Sveinbjarnardottir E. Factors associated with suicide ideation in adults. Soc Psychiatry Psychiatr Epidemiol 1998; 33:97–103.
- Miotto P, De Coppi M, Frezza M, Petretto D, Masala C, Preti A. Suicidal ideation and aggressiveness in school-aged youths. Psychiatry Res 2003; 120:247–255.
- De Man AF, Leduc CP. Suicidal ideation in high school students: depression and other correlates. J Clin Psychol 1995; 51:173–181.
- Chioqueta AP, Stiles TC. The relationship between psychological buffers, hopelessness, and suicidal ideation: identification of protective factors. Crisis 2007; 28:67–73.
- Hatcher-Kay C, King CA. Depression and suicide. Pediatr Rev 2003; 24:363–371.
- Brown GK, Beck AT, Steer RA, Grisham JR. Risk factors for suicide in psychiatric outpatients: a 20-year prospective study. J Consult Clin Psychol 2000; 68:371–377.
- Fawcett J, Scheftner WA, Fogg L, et al. Time-related predictors of suicide in major affective disorder. Am J Psychiatry 1990; 147:1189–1194.
- Bulik CM, Carpenter LL, Kupfer DJ, Frank E. Features associated with suicide attempts in recurrent major depression. J Affect Disord 1990; 18:29–37.
- Drake RE, Gates C, Cotton PG, Whitaker A. Suicide among schizophrenics. Who is at risk? J Nerv Ment Dis 1984; 172:613–617.
- Oquendo MA, Galfalvy H, Russo S, et al. Prospective study of clinical predictors of suicidal acts after a major depressive episode in patients with major depressive disorder or bipolar disorder. Am J Psychiatry 2004; 161:1433–1441.
- Mann JJ, Ellis SP, Waternaux CM, et al. Classification trees distinguish suicide attempters in major psychiatric disorders: a model of clinical decision making. J Clin Psychiatry 2008; 69:23–31.
- Galfalvy HC, Oquendo MA, Mann JJ. Evaluation of clinical prognostic models for suicide attempts after a major depressive episode. Acta Psychiatr Scand 2008; 117:244–252.
- Crosby AE, Cheltenham MP, Sacks JJ. Incidence of suicidal ideation and behavior in the United States, 1994. Suicide Life Threat Behav 1999; 29:131–140.
- Weissman MM, Bland RC, Canino GJ, et al. Prevalence of suicide ideation and suicide attempts in nine countries. Psychol Med 1999; 29:9–17.
- Kessler RC, Borges G, Walters EE. Prevalence of and risk factors for lifetime suicide attempts in the National Comorbidity Survey. Arch Gen Psychiatry 1999; 56:617–626.
- Bernal M, Haro JM, Bernert S, et al; ESEMED/MHEDEA Investigators. Risk factors for suicidality in Europe: results from the ESEMED study. J Affect Disord 2007; 101:27–34.
- Selby EA, Anestis MD, Bender TW, et al. Overcoming the fear of lethal injury: evaluating suicidal behavior in the military through the lens of the Interpersonal-Psychological Theory of Suicide. Clin Psychol Rev 2010; 30:298–307.
- American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 3rd ed. Washington, DC: American Psychiatric Association; 1980:236–238.
- Schnurr PP, Friedman MJ, Bernardy NC. Research on posttraumatic stress disorder: epidemiology, pathophysiology, and assessment. J Clin Psychol 2002; 58:877–889.
- Saigh PA, Bremner JD. The history of posttraumatic stress disorder. In:Saigh PA, Bremner JD, eds. Posttraumatic Stress Disorder. A Comprehensive Text. Boston, MA: Allyn & Bacon; 1999:1–17.
- Hageman I, Andersen HS, Jørgensen MB. Post-traumatic stress disorder: a review of psychobiology and pharmacotherapy. Acta Psychiatr Scand 2001; 104:411–422.
- American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. Text Revision. 4th ed. Washington, DC: American Psychiatric Association; 2000:463–468.
- Sher L, Yehuda R. Preventing suicide among returning combat veterans: a moral imperative. Mil Med 2011; 176:601–602.
- Davidson JR, Hughes D, Blazer DG, George LK. Post-traumatic stress disorder in the community: an epidemiological study. Psychol Med 1991; 21:713–721.
- Kessler RC, Sonnega A, Bromet E, Hughes M, Nelson CB. Posttraumatic stress disorder in the National Comorbidity Survey. Arch Gen Psychiatry 1995; 52:1048–1060.
- Sher L. Recognizing post-traumatic stress disorder. QJM 2004; 97:1–5.
- Kaplan GB, Vasterling JJ, Vedak PC. Brain-derived neurotrophic factor in traumatic brain injury, post-traumatic stress disorder, and their comorbid conditions: role in pathogenesis and treatment. Behav Pharmacol 2010; 21:427–437.
- Taubman-Ben-Ari O, Rabinowitz J, Feldman D, Vaturi R. Post-traumatic stress disorder in primary-care settings: prevalence and physicians’ detection. Psychol Med 2001; 31:555–560.
- Tanielian T, Jaycox LH, editors. Invisible Wounds of War. Psychological and Cognitive Injuries, Their Consequences, and Services to Assist Recovery. Santa Monica, CA: RAND Corporation; 2008.
- Spooner MH. Suicide claiming more British Falkland veterans than fighting did. CMAJ 2002; 166:1453.
- Kapur N, While D, Blatchley N, Bray I, Harrison K. Suicide after leaving the UK armed forces—a cohort study. PLoS Med 2009; 6:e26.
- A brief history of the Falklands Islands. Part 7— The 1982 War and Beyond. http://www.falklands.info/history/history7.html. Accessed January 5, 2012.
- Sher L, Vilens A, editors. War and Suicide. Hauppauge, New York: Nova Science Publishers; 2009.
- Kaplan MS, McFarland BH, Huguet N. Firearm suicide among veterans in the general population: findings from the National Violent Death Reporting System. J Trauma 2009; 67:503–507.
- Jakupcak M, Cook J, Imel Z, Fontana A, Rosenheck R, McFall M. Posttraumatic stress disorder as a risk factor for suicidal ideation in Iraq and Afghanistan War veterans. J Trauma Stress 2009; 22:303–306.
- Tarrier N, Gregg L. Suicide risk in civilian PTSD patients—predictors of suicidal ideation, planning and attempts. Soc Psychiatry Psychiatr Epidemiol 2004; 39:655–661.
- Bell JB, Nye EC. Specific symptoms predict suicidal ideation in Vietnam combat veterans with chronic post-traumatic stress disorder. Mil Med 2007; 172:1144–1147.
- Kramer TL, Lindy JD, Green BL, Grace MC, Leonard AC. The comorbidity of post-traumatic stress disorder and suicidality in Vietnam veterans. Suicide Life Threat Behav 1994; 24:58–67.
- Ferrada-Noli M, Asberg M, Ormstad K. Suicidal behavior after severe trauma. Part 2: The association between methods of torture and of suicidal ideation in posttraumatic stress disorder. J Trauma Stress 1998; 11:113–124.
- Tiet QQ, Finney JW, Moos RH. Recent sexual abuse, physical abuse, and suicide attempts among male veterans seeking psychiatric treatment. Psychiatr Serv 2006; 57:107–113.
- Pietrzak RH, Goldstein MB, Malley JC, Rivers AJ, Johnson DC, Southwick SM. Risk and protective factors associated with suicidal ideation in veterans of Operations Enduring Freedom and Iraqi Freedom. J Affect Disord 2010; 123:102–107.
- Guerra VS, Calhoun PS; Mid-Atlantic Mental Illness Research, Education and Clinical Center Workgroup. Examining the relation between posttraumatic stress disorder and suicidal ideation in an OEF/OIF veteran sample. J Anxiety Disord 2011; 25:12–18.
- Maguen S, Luxton DD, Skopp NA, et al. Killing in combat, mental health symptoms, and suicidal ideation in Iraq war veterans. J Anxiety Disord 2011; 25:563–567.
- Jakupcak M, Hoerster KD, Varra A, Vannoy S, Felker B, Hunt S. Hopelessness and suicidal ideation in Iraq and Afghanistan War Veterans reporting subthreshold and threshold posttraumatic stress disorder. J Nerv Ment Dis 2011; 199:272–275.
- Lemaire CM, Graham DP. Factors associated with suicidal ideation in OEF/OIF veterans. J Affect Disord 2011; 130:231–238.
- Sher L. The concept of post-traumatic mood disorder. Med Hypotheses 2005; 65:205–210.
- Sher L. Suicide in war veterans: the role of comorbidity of PTSD and depression. Expert Rev Neurother 2009; 9:921–923.
- Blank AS. Clinical detection, diagnosis, and differential diagnosis of posttraumatic stress disorder. Psychiatr Clin North Am 1994; 17:351–383.
- Denneson LM, Basham C, Dickinson KC, et al. Suicide risk assessment and content of VA health care contacts before suicide completion by veterans in Oregon. Psychiatr Serv 2010; 61:1192–1197.
- US Department of Veterans Affairs. Mental Health Suicide Prevention. http://www.mentalhealth.va.gov/suicide_prevention. Accessed December 8, 2011.
- Gliatto MF, Rai AK. Evaluation and treatment of patients with suicidal ideation. Am Fam Physician 1999; 59:1500–1506.
- Sher L, Oquendo MA, Mann JJ. Risk of suicide in mood disorders. Clin Neurosci Res 2001; 1:337–344.
- Oquendo MA, Currier D, Mann JJ. Prospective studies of suicidal behavior in major depressive and bipolar disorders: what is the evidence for predictive risk factors? Acta Psychiatr Scand 2006; 114:151–158.
- Sher L. Alcoholism and suicidal behavior: a clinical overview. Acta Psychiatr Scand 2006; 113:13–22.
- Moscicki EK. Identification of suicide risk factors using epidemiologic studies. Psychiatr Clin North Am 1997; 20:499–517.
- Goldman HH. Review of General Psychiatry, 5th ed. New York, NY: Lange Medical Books/McGraw-Hill; 2000.
- Hendin H, Haas AP. Suicide and guilt as manifestations of PTSD in Vietnam combat veterans. Am J Psychiatry 1991; 148:586–591.
- Henning KR, Frueh BC. Combat guilt and its relationship to PTSD symptoms. J Clin Psychol 1997; 53:801–808.
- Marx BP, Foley KM, Feinstein BA, Wolf EJ, Kaloupek DG, Keane TM. Combat-related guilt mediates the relations between exposure to combat-related abusive violence and psychiatric diagnoses. Depress Anxiety 2010; 27:287–293.
- Hetrick SE, Purcell R, Garner B, Parslow R. Combined pharmacotherapy and psychological therapies for post traumatic stress disorder (PTSD). Cochrane Database Syst Rev 2010; ( 7):CD007316.
- Brent DA, Oquendo M, Birmaher B, et al. Familial pathways to early-onset suicide attempt: risk for suicidal behavior in offspring of mood-disordered suicide attempters. Arch Gen Psychiatry 2002; 59:801–807.
- Australian Department of Health and Ageing. Fact sheet 6: Resilience, vulnerability, and suicide prevention. Living is for Everyone (LIFE) fact sheets. www.livingisforeveryone.com.au/LIFE-Fact-sheets.html. Accessed December 8, 2011.
In military veterans, depression, posttraumatic stress disorder (PTSD), and suicidal thoughts are common and closely linked. Veterans are less likely to seek care and, when they act on suicidal thoughts, more likely to die. Therefore, screening, timely diagnosis, and effective intervention are critical.1
In this article, we review the signs and symptoms of depression and PTSD, the relationship of these conditions to suicidality in veterans, and the role of the non-mental-health clinician in detecting suicidal ideation early and then taking appropriate action. Early identification of suicidality may help save the lives of those who otherwise might not seek care.
FROM IDEA TO PLAN TO ACTION
Suicide can be viewed as a process that begins with suicidal ideation, followed by planning and then by a suicidal act,2–9 and suicidal ideation can be prompted by depression or PTSD.
Suicidal ideation, defined as any thought of being the agent of one’s own death,2 is relatively common. Most people who attempt suicide report a history of suicidal ideation.10 In fact, current suicidal ideation increases suicide risk,11,12 and death from suicide is especially correlated with the worst previous suicidal ideation.3
Suicidal ideation is an important predictor of suicidal acts in all major psychiatric conditions.3,13–17 In a longitudinal study in a community sample, adolescents who had suicidal ideation at age 15 were more likely to have attempted suicide by age 30.5
The annual incidence of suicidal ideation in the United States is estimated to be 5.6%,18 while its estimated lifetime prevalence in Western countries ranges from 2.09% to 18.51%.19 A national survey found that 13.5% of Americans had suicidal ideation at some point during their lifetime.20 About 34% of people who think about suicide report going from seriously thinking about it to making a plan, and 72% of planners move from a plan to an attempt.20 In the European Study of the Epidemiology of Mental Disorders,21 the lifetime prevalence of suicidal ideation was 7.8%, and of suicide attempts 1.3%. Being female, younger, divorced, or widowed was associated with a higher prevalence of suicide ideation and attempts.
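The cascade implied by these survey figures can be made concrete with a back-of-envelope calculation. This is only a sketch: it assumes the conditional percentages apply uniformly across the population and ignores attempts made without a plan.

```python
# Back-of-envelope calculation of the ideation -> plan -> attempt cascade
# using the survey percentages cited in the text. Assumes the conditional
# percentages apply uniformly; unplanned attempts are not counted.

lifetime_ideation = 0.135    # 13.5% report suicidal ideation at some point in life
plan_given_ideation = 0.34   # 34% of ideators go on to make a plan
attempt_given_plan = 0.72    # 72% of planners go on to make an attempt

lifetime_plan = lifetime_ideation * plan_given_ideation
lifetime_planned_attempt = lifetime_plan * attempt_given_plan

print(f"Implied lifetime prevalence of a suicide plan: {lifetime_plan:.1%}")
print(f"Implied lifetime prevalence of a planned attempt: {lifetime_planned_attempt:.1%}")
```

On these assumptions, roughly 4.6% of the population would form a plan and about 3.3% would make a planned attempt over a lifetime, illustrating why any report of ideation warrants follow-up.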
Although terms such as “acute suicidal ideation,” “chronic suicidal ideation,” “active suicidal ideation,” and “passive suicidal ideation” are used in the clinical and research literature, the difference between them is not clear. Regardless of the term one uses, any suicidal ideation should be taken very seriously.
HABITUATION IN VETERANS
According to the Interpersonal-Psychological Theory of Suicide,22 the suicidal process is related to feelings that one does not belong with other people, feelings that one is a burden on others or society, and an acquired capability to overcome the fear of pain associated with suicide. Veterans are likely to have acquired this capability through military training and combat exposure, which may habituate them to the fear of painful experiences, including suicide.
FEATURES AND CAUSES OF PTSD
PTSD—a severe, multifaceted disorder precipitated by exposure to a psychologically distressing experience—first appeared in the Diagnostic and Statistical Manual of Mental Disorders (DSM-III) in 1980,23,24 arising from studies of veterans of the Vietnam War and of civilian victims of natural and man-made disasters.44,45 However, the study of PTSD dates back more than 100 years. Before 1980, posttraumatic syndromes were recognized by various names, including railway spine, shell shock, traumatic (war) neurosis, concentration-camp syndrome, and rape-trauma syndrome.24,25 The symptoms described in these syndromes overlap considerably with what we now recognize as PTSD.
According to the most recent edition of the Diagnostic and Statistical Manual, DSM-IV-TR,27 the basic feature of PTSD is the development of characteristic symptoms following exposure to a stressor event. Examples include:
- Direct personal experience of an event that involves actual or threatened death or serious injury, or other threat to one’s physical integrity
- Witnessing an event that involves death, injury, or a threat to the physical integrity of another person
- Learning about unexpected or violent death, serious harm, or threat of death or injury experienced by a family member or other close associate.
People react to the event with fear and helplessness and try to avoid being reminded of it.
Traumatic events leading to PTSD include military combat, violent personal assault, being kidnapped or taken hostage, experiencing a terrorist attack, torture, incarceration, a natural or man-made disaster, or an automobile accident, or being diagnosed with a life-threatening illness.
PTSD is a potentially fatal disorder: patients may die by suicide. There may be differences in the psychobiology of PTSD and suicidal behavior between war veterans and civilians.28
PTSD often coexists with other psychiatric illnesses29,30: the National Comorbidity Survey found that about 80% of patients with PTSD meet the criteria for at least one other psychiatric disorder.30 Symptoms of PTSD and depression overlap significantly. Common features include diminished interest or participation in significant activities; irritability; sleep disturbance; difficulty concentrating; restricted range of affect; and social detachment.
PTSD also often coexists with traumatic brain injury and other neurologic and medical conditions.31,32 More often than not, the clinician faces a patient with PTSD who carries multiple diagnoses, both psychiatric and medical.
Unfortunately, studies show that PTSD often goes unrecognized by non-mental-health practitioners.31,33 In a national cohort of primary care patients in Israel, 9% met criteria for current PTSD, but only 2% of actual cases were recognized by their treating physician.33
SUICIDE RISK IN VETERANS
Suicidal behavior is a critical problem in war veterans. During the wars in Iraq and Afghanistan, the US Army’s suicide rate increased from 12.4 per 100,000 in 2003 to 18.1 per 100,000 in 2008.34 In the United Kingdom, more veterans of the 1982 Falklands War have died by suicide since the war ended than were killed in action during it.35 The South Atlantic Medal Association, which represents and helps Falklands veterans, believes that 264 veterans had taken their own lives by 2002, exceeding the 255 who died in active service. The suicide rate in Falklands War veterans is about three times higher than the rate in those who left the UK armed forces from 1996 to 2005.36,37
Studies in different countries and in different generations of war veterans have suggested a relatively high prevalence of suicidal ideation and attempts.38
Suicidal ideation is more dangerous in war veterans than in the general population because they know how to use firearms and they often own them. In other words, they often possess the lethal means to act on their suicidal thoughts.
And female veterans may be more likely to commit suicide with a firearm. A US study39 observed that, after adjustment for age, marital status, race, and region of residence, female veterans who committed suicide were 1.6 times more likely than nonveterans to have used a firearm, and male veterans were 1.3 times more likely.
DEPRESSION, PTSD, AND SUICIDE RISK
Suicidal ideation in war veterans is often associated with PTSD and depression, conditions that often coexist. And PTSD has been shown to be a risk factor for suicidal ideation in American veterans of the wars in Iraq and Afghanistan.40 In a survey of 407 veterans, those who screened positive for PTSD (n = 202) were more than four times as likely to endorse having suicidal ideation compared with veterans who screened negative for PTSD. In veterans who screened positive for PTSD, the risk of suicidal ideation was 5.7 times higher in those with two or more coexisting psychiatric disorders compared with veterans with PTSD alone.40
Additional risk factors
Factors contributing to the risk of suicidal ideation and behavior in patients with PTSD include comorbid disorders (especially depression and substance abuse), impulsive behavior, feelings of guilt or shame, re-experiencing symptoms, and prewar traumatic experiences.41–45
Recent studies have analyzed factors associated with suicidal ideation in US veterans of the wars in Iraq and Afghanistan. Pietrzak et al46 surveyed 272 veterans, of whom 34 (12.5%) reported contemplating suicide in the 2 weeks prior to completing the survey. Screening positive for PTSD and depression and having psychosocial difficulties were associated with suicidal ideation, while postdeployment social support and a sense of purpose and control were negatively associated with it.
Other authors47 found that only the “emotional numbing” cluster of PTSD symptoms and the “cognitive-affective” cluster of depression symptoms were distinctively associated with suicidal ideation. Maguen et al48 recently reported that 2.8% of newly discharged US soldiers endorsed suicidal ideation. Prior suicide attempts, prior psychiatric medication, and killing in combat were each significantly associated with suicidal ideation, with killing exerting a mediated effect through depression and PTSD symptoms.
Another recent study49 suggests that veterans reporting subthreshold PTSD (ie, having symptoms of PTSD but not meeting all the criteria for the diagnosis) were three times more likely to admit to having suicidal ideation than veterans without PTSD, which indicates that subthreshold PTSD may increase suicide risk.
Lemaire and Graham50 reported that prior exposure to physical or sexual abuse and having a history of a prior suicide attempt, a current diagnosis of a psychotic disorder, a depressive disorder, and PTSD were associated with current suicidal ideation. Other factors related to suicidal ideation were female sex, deployment concerns related to training (a protective factor—ie, it reduces suicide risk by enhancing resilience and by counterbalancing risk factors), the deployment environment, family concerns, postdeployment support (a protective factor), and postdeployment stressors.
PTSD and depression: An additive effect
These findings also suggest that the coexistence of PTSD and depression increases the risk of suicidal ideation more than either condition alone. This is consistent with the concept of posttraumatic mood disorder, ie, that when these diagnoses coexist, the resulting illness differs from either one occurring alone and carries a higher risk of suicidal ideation and behavior.51,52
HOW TO ASSESS SUICIDE RISK
Physicians are in a key position to screen for depression and PTSD in all their patients, including those who are veterans.31,53
Traumatic events of adulthood can be asked about directly. For example, “Have you ever been physically attacked or assaulted? Have you ever been in an automobile accident? Have you ever been in a war or a disaster?” A positive response should alert the physician to inquire further about the relationship between the event and any current symptoms.
Traumatic childhood experiences require reassuring statements of normality to put the patient at ease. For example, “Many people continue to think about frightening aspects of their childhood. Do you?”
Physicians working with war veterans suffering from PTSD or depression should regularly inquire about suicidal ideation, and if the patient admits to having suicidal ideation, the physician should ask about the possession of firearms or other lethal means.
This type of screening has limitations. Fear of being socially stigmatized or of appearing weak may prevent veterans from disclosing thoughts of suicide. And one study54 found little evidence to suggest that inquiring about suicide successfully identifies veterans most at risk of suicide.
Indirect indicators of suicidality
Identifying indirect indicators of suicidal thoughts is also important: these can include pill-seeking behavior; talking or writing about death, dying, or suicide; hopelessness; rage or uncontrolled anger; seeking revenge; reckless or risky behaviors or activities; feeling trapped; and saying or feeling there is no reason for living.55
Other warning signs include depressed mood, anhedonia, insomnia, severe anxiety, and panic attacks.56 A prior suicide attempt, a family history of suicidal behavior, and comorbidity of depression and alcoholism are associated with a high suicide risk.56–59
Suicidal behavior is more common after recent, severe, stressful life events and in physical illnesses such as HIV/AIDS, Huntington disease, malignant neoplasm, multiple sclerosis, peptic ulcer, renal disease, spinal cord injury, and systemic lupus erythematosus. This is true in both veterans and nonveterans.60
Useful questions
Useful questions in the assessment of suicidal risk can be formulated as follows61:
- How have you reacted to stress in the past, and how effective are your usual coping strategies?
- Have you contemplated or attempted suicide in the past? If so, how many times and under what circumstances? And how is your current situation compared with past situations when you considered or attempted suicide?
- Do you ever feel hopeless, helpless, powerless, or extremely angry?
- Do you ever have hallucinations or delusions?
The role of guilt
It is important to ask about guilt feelings. Hendin and Haas62 observed that in veterans with PTSD related to combat experience, combat-related guilt was the most significant predictor of suicide attempts and of preoccupation with suicide after discharge. Combat veterans may feel guilt about surviving when others have died, acts of omission and commission, and thoughts or feelings.63 Some have suggested that guilt may be a mechanism through which violence is related to PTSD and major depressive disorder in combat veterans.64
INTERVENTIONS
Patients with comorbid depression, PTSD, and suicidal ideation are usually very sick and should be referred to a psychiatrist. They are usually treated with antidepressants, such as paroxetine (Paxil) or sertraline (Zoloft), and psychotherapy.65 Patients who have suicidal intent or a plan should be referred to an emergency department for evaluation or hospitalization. All veterans should be given the toll-free phone number of the Veterans Crisis Line (1-800-273-8255), a US Department of Veterans Affairs (VA) resource that connects veterans in crisis and their families and friends with qualified VA professionals.
As with many illnesses, such as cancer, suicidal behavior is most treatable and yields the best outcome when diagnosed and treated early.66 And the earliest manifestation of suicidal behavior is suicidal ideation.
The association of suicidal ideation with PTSD and depression underscores the importance of timely diagnosis and effective treatment of these conditions in war veterans. Veterans experiencing subthreshold PTSD or depression may be less likely to receive mental health treatment. Non-mental-health clinicians should therefore be educated in detecting symptoms of PTSD and depression; they may also detect suicidality early, which can save lives.
Promoting social, emotional, and spiritual wellness
Our patients remind us every day that the work we do matters, that we have much more to learn, and that the more we understand suicidal behavior in veterans, the more we can do to reduce their suffering. We need to promote their social, emotional, and spiritual wellness. Encouraging resilience, optimism, and mental health can protect them from depression, suicidal ideation, and suicidal behavior. Resilience can be promoted by teaching patients to:
- Build relationships with family members and friends who can provide support
- Think well about themselves and identify their areas of strength
- Invest time and energy in developing new skills
- Challenge negative thoughts; try to find optimistic ways of viewing any situation
- Look after their physical health and exercise regularly
- Get involved in community activities to help counter feelings of isolation
- Ask for assistance and support when they need it.67
Our knowledge about what works and what does not work in suicide prevention in veterans is evolving. Research addressing combat-related PTSD, depression, and suicidal behavior in war veterans is critically needed to better understand the nature of these conditions.
In military veterans, depression, posttraumatic stress disorder (PTSD), and suicidal thoughts are common and closely linked. Veterans are less likely to seek care and more likely to act successfully on suicidal thoughts. Therefore, screening, timely diagnosis, and effective intervention are critical.1
In this article, we review the signs and symptoms of depression and PTSD, the relationship of these conditions to suicidality in veterans, and the role of the non-mental-health clinician in detecting suicidal ideation early and then taking appropriate action. Early identification of suicidality may help save lives of those who otherwise may not seek care.
FROM IDEA TO PLAN TO ACTION
Suicide can be viewed as a process that begins with suicidal ideation, followed by planning and then by a suicidal act,2–9 and suicidal ideation can be prompted by depression or PTSD.
Suicidal ideation, defined as any thought of being the agent of one’s own death,2 is relatively common. Most people who attempt suicide report a history of suicidal ideation.10 In fact, current suicidal ideation increases suicide risk,11,12 and death from suicide is especially correlated with the worst previous suicidal ideation.3
Suicidal ideation is an important predictor of suicidal acts in all major psychiatric conditions.3,13–17 In a longitudinal study in a community sample, adolescents who had suicidal ideation at age 15 were more likely to have attempted suicide by age 30.5
The annual incidence of suicidal ideation in the United States is estimated to be 5.6%,18 while its estimated lifetime prevalence in Western countries ranges from 2.09% to 18.51%.19 A national survey found that 13.5% of Americans had suicidal ideation at some point during their lifetime.20 About 34% of people who think about suicide report going from seriously thinking about it to making a plan, and 72% of planners move from a plan to an attempt.20 In the European Study of the Epidemiology of Mental Disorders,21 the lifetime prevalence of suicidal ideation was 7.8%, and of suicide attempts 1.3%. Being female, younger, divorced, or widowed was associated with a higher prevalence of suicide ideation and attempts.
Although terms such as “acute suicidal ideation,” “chronic suicidal ideation,” “active suicidal ideation,” and “passive suicidal ideation” are used in the clinical and research literature, the difference between them is not clear. Regardless of the term one uses, any suicidal ideation should be taken very seriously.
HABITUATION IN VETERANS
Interestingly, according to the Interpersonal-Psychological Theory of Suicide,22 the suicidal process is related to feelings that one does not belong with other people, feelings that one is a burden on others or society, and an acquired capability to overcome the fear of pain associated with suicide.22 Veterans are likely to have acquired this capability as the result of military training and combat exposure, which may cause habituation to fear of painful experiences, including suicide.
FEATURES AND CAUSES OF PTSD
PTSD—a severe, multifaceted disorder precipitated by exposure to a psychologically distressing experience—first appeared in the Diagnostic and Statistical Manual of Psychiatric Disorders (DSM-III) in 1980,23,24 arising from studies of veterans of the Vietnam war and of civilian victims of natural and man-made disasters.44,45 However, the study of PTSD dates back more than 100 years. Before 1980, posttraumatic syndromes were recognized by various names, including railway spine, shell shock, traumatic (war) neurosis, concentration-camp syndrome, and rape-trauma syndrome.24,25 The symptoms described in these syndromes overlap considerably with what we now recognize as PTSD.
According to the most recent edition of the Diagnostic and Statistical Manual, DSM-IV-TR,27 the basic feature of PTSD is the development of characteristic symptoms following exposure to a stressor event. Examples include:
- Direct personal experience of an event that involves actual or threatened death or serious injury, or other threat to one’s physical integrity
- Witnessing an event that involves death, injury, or a threat to the physical integrity of another person
- Learning about unexpected or violent death, serious harm, or threat of death or injury experienced by a family member or other close associate.
People react to the event with fear and helplessness and try to avoid being reminded of it.
Traumatic events leading to PTSD include military combat, violent personal assault, being kidnapped or taken hostage, experiencing a terrorist attack, torture, incarceration, a natural or man-made disaster, or an automobile accident, or being diagnosed with a life-threatening illness.
PTSD is a potentially fatal disorder through suicide. There may be differences in the psychobiology of PTSD and suicidal behavior between war veterans and civilians.28
PTSD often coexists with other psychiatric illnesses29,30: the National Comorbidity Survey found that about 80% of patients with PTSD meet the criteria for at least one other psychiatric disorder.30 Symptoms of PTSD and depression overlap significantly. Common features include diminished interest or participation in significant activities; irritability; sleep disturbance; difficulty concentrating; restricted range of affect; and social detachment.
PTSD also often coexists with traumatic brain injury and other neurologic and medical conditions.31,32 The clinician is more often than not faced with a PTSD patient with multiple diagnoses—psychiatric and medical.
Unfortunately, studies show that PTSD often goes unrecognized by non-mental-health practitioners.31,33 In a national cohort of primary care patients in Israel, 9% met criteria for current PTSD, but only 2% of actual cases were recognized by their treating physician.33
SUICIDE RISK IN VETERANS
Suicidal behavior is a critical problem in war veterans. During the wars in Iraq and Afghanistan, the US Army’s suicide rate has increased from 12.4 per 100,000 in 2003 to 18.1 per 100,000 in 2008.34 In the United Kingdom, more veterans have committed suicide since the end of the 1982 Falklands War than the number of servicemen killed in action during the Falklands War.35 The South Atlantic Medal Association, which represents and helps Falklands veterans, believes that 264 veterans had taken their own lives by 2002, a number exceeding the 255 who died in active service. The suicide rate in Falklands War veterans is about three times higher than the rate in those who left the UK armed forces from 1996 to 2005.36,37
Observations have suggested a relatively high prevalence of suicide ideation and attempts in different generations of war veterans and in different countries.38
Suicidal ideation is more dangerous in war veterans than in the general population because they know how to use firearms and they often own them. In other words, they often possess the lethal means to act on their suicidal thoughts.
And female veterans may be more likely to commit suicide with a firearm. A US study39 observed that female veterans who committed suicide were 1.6 times more likely to have used a firearm and male veterans were 1.3 more likely, compared with nonveterans and adjusting for age, marital status, race, and region of residence.
DEPRESSION, PTSD, AND SUICIDE RISK
Suicidal ideation in war veterans is often associated with PTSD and depression, conditions that often coexist. And PTSD has been shown to be a risk factor for suicidal ideation in American veterans of the wars in Iraq and Afghanistan.40 In a survey of 407 veterans, those who screened positive for PTSD (n = 202) were more than four times as likely to endorse having suicidal ideation compared with veterans who screened negative for PTSD. In veterans who screened positive for PTSD, the risk of suicidal ideation was 5.7 times higher in those with two or more coexisting psychiatric disorders compared with veterans with PTSD alone.40
Additional risk factors
Factors contributing to the risk of suicidal ideation and behavior in patients with PTSD include comorbid disorders (especially depression and substance abuse), impulsive behavior, feelings of guilt or shame, re-experiencing symptoms, and prewar traumatic experiences.41–45
Recent studies have analyzed factors associated with suicidal ideation in US veterans of the wars in Iraq and Afghanistan. Pietrzak et al46 surveyed 272 veterans, of whom 34 (12.5%) reported contemplating suicide in the 2 weeks prior to completing the survey. Screening positive for PTSD and depression and having psychosocial difficulties were associated with suicidal ideation, while postdeployment social support and a sense of purpose and control were negatively associated with it.
Other authors47 found that only the “emotional numbing” cluster of PTSD symptoms and the “cognitive-affective” cluster of depression symptoms were distinctively associated with suicidal ideation. Maguen et al48 recently reported that 2.8% of newly discharged US soldiers endorsed suicidal ideation. Prior suicide attempts, prior psychiatric medication, and killing in combat were each significantly associated with suicidal ideation, with killing exerting a mediated effect through depression and PTSD symptoms.
Another recent study49 suggests that veterans reporting subthreshold PTSD (ie, having symptoms of PTSD but not meeting all the criteria for the diagnosis) were three times more likely to admit to having suicidal ideation compared with veterans without PTSD,49 which indicates that subthreshold PTSD may increase suicide risk.
Lemaire and Graham50 reported that prior exposure to physical or sexual abuse and having a history of a prior suicide attempt, a current diagnosis of a psychotic disorder, a depressive disorder, and PTSD were associated with current suicidal ideation. Other factors related to suicidal ideation were female sex, deployment concerns related to training (a protective factor—ie, it reduces suicide risk by enhancing resilience and by counterbalancing risk factors), the deployment environment, family concerns, postdeployment support (a protective factor), and postdeployment stressors.
PTSD and depression: An additive effect
These findings also suggest that the coexistence of PTSD and depression increases the risk of suicidal ideation more than PTSD or depression alone. This is consistent with the concept of posttraumatic mood disorder, ie, that when these diagnoses coexist, they are different than when they occur alone, and that the coexistence increases the risk of suicidal ideation and behavior.51,52
HOW TO ASSESS SUICIDE RISK
Physicians are in a key position to screen for depression and PTSD in all their patients, including those who are veterans.31,53
Traumatic events of adulthood can be asked about directly. For example, “Have you ever been physically attacked or assaulted? Have you ever been in an automobile accident? Have you ever been in a war or a disaster?” A positive response should alert the physician to inquire further about the relationship between the event and any current symptoms.
Traumatic childhood experiences require reassuring statements of normality to put the patient at ease. For example, “Many people continue to think about frightening aspects of their childhood. Do you?”
Physicians working with war veterans suffering from PTSD or depression should regularly inquire about suicidal ideation, and if the patient admits to having suicidal ideation, the physician should ask about the possession of firearms or other lethal means.
This type of screening has limitations. Fear of being socially stigmatized or of appearing weak may prevent veterans from disclosing thoughts of suicide. And one study54 found little evidence to suggest that inquiring about suicide successfully identifies veterans most at risk of suicide.
Indirect indicators of suicidality
Identifying indirect indicators of suicidal thoughts is also important: these can include pill-seeking behavior; talking or writing about death, dying, or suicide; hopelessness; rage or uncontrolled anger; seeking revenge; reckless or risky behaviors or activities; feeling trapped; and saying or feeling there is no reason for living.55
Other warning signs include depressed mood, anhedonia, insomnia, severe anxiety, and panic attacks.56 A prior suicide attempt, a family history of suicidal behavior, and comorbidity of depression and alcoholism are associated with a high suicide risk.56–59
Suicidal behavior is more common after recent, severe, stressful life events and in physical illnesses such as HIV/AIDS, Huntington disease, malignant neoplasm, multiple sclerosis, peptic ulcer, renal disease, spinal cord injury, and systemic lupus erythematosus. This is true in both veterans and nonveterans.60
Useful questions
Useful questions in the assessment of suicidal risk can be formulated as follows61:
- How have you reacted to stress in the past, and how effective are your usual coping strategies?
- Have you contemplated or attempted suicide in the past? If so, how many times and under what circumstances? And how does your current situation compare with past situations in which you considered or attempted suicide?
- Do you ever feel hopeless, helpless, powerless, or extremely angry?
- Do you ever have hallucinations or delusions?
The role of guilt
It is important to ask about guilt feelings. Hendin and Haas62 observed that in veterans with PTSD related to combat experience, combat-related guilt was the most significant predictor of suicide attempts and of preoccupation with suicide after discharge. Combat veterans may feel guilt about surviving when others have died, acts of omission and commission, and thoughts or feelings.63 Some have suggested that guilt may be a mechanism through which violence is related to PTSD and major depressive disorder in combat veterans.64
INTERVENTIONS
Patients with comorbid depression, PTSD, and suicidal ideation are usually very sick and should be referred to a psychiatrist. They are usually treated with antidepressants, such as paroxetine (Paxil) or sertraline (Zoloft), and psychotherapy.65 Patients who have a suicidal intent or a plan should be referred to an emergency department for evaluation or hospitalization. All veterans should be given the toll-free phone number of the Veterans Crisis Line (1-800-273-8255), a US Department of Veterans Affairs (VA) resource that connects veterans in crisis, and their families and friends, with qualified VA professionals.
As with many illnesses, such as cancer, suicidal behavior is most treatable and yields the best outcome when diagnosed and treated early.66 And the earliest manifestation of suicidal behavior is suicidal ideation.
The association of suicidal ideation with PTSD and depression underscores the importance of timely diagnosis and effective treatment of these conditions in war veterans. Veterans with subthreshold PTSD or depression may be less likely to receive mental health treatment, so non-mental-health clinicians should be educated to recognize symptoms of PTSD and depression. These clinicians can also help detect suicidality early, which may save lives.
Promoting social, emotional, and spiritual wellness
Our patients remind us every day that the work we do matters, that we have much more to learn, and that the more we understand suicidal behavior in veterans, the more we can do to reduce their suffering. We need to promote their social, emotional, and spiritual wellness. Encouraging resilience, optimism, and mental health can protect them from depression, suicidal ideation, and suicidal behavior. Resilience can be promoted by teaching patients to:
- Build relationships with family members and friends who can provide support
- Think well about themselves and identify their areas of strength
- Invest time and energy in developing new skills
- Challenge negative thoughts; try to find optimistic ways of viewing any situation
- Look after their physical health and exercise regularly
- Get involved in community activities to help counter feelings of isolation
- Ask for assistance and support when they need it.67
Our knowledge about what works and what does not work in suicide prevention in veterans is evolving. Research addressing combat-related PTSD, depression, and suicidal behavior in war veterans is critically needed to better understand the nature of these conditions.
- Mann JJ. Searching for triggers of suicidal behavior. Am J Psychiatry 2004; 161:395–397.
- American Psychiatric Association. Practice Guideline For The Assessment and Treatment of Patients with Suicidal Behaviors. Arlington, VA: American Psychiatric Publishing, Inc.; 2003.
- Beck AT, Brown GK, Steer RA, Dahlsgaard KK, Grisham JR. Suicide ideation at its worst point: a predictor of eventual suicide in psychiatric outpatients. Suicide Life Threat Behav 1999; 29:1–9.
- Beck AT, Steer RA, Kovacs M, Garrison B. Hopelessness and eventual suicide: a 10-year prospective study of patients hospitalized with suicidal ideation. Am J Psychiatry 1985; 142:559–563.
- Reinherz HZ, Tanner JL, Berger SR, Beardslee WR, Fitzmaurice GM. Adolescent suicidal ideation as predictive of psychopathology, suicidal behavior, and compromised functioning at age 30. Am J Psychiatry 2006; 163:1226–1232.
- Vilhjalmsson R, Kristjansdottir G, Sveinbjarnardottir E. Factors associated with suicide ideation in adults. Soc Psychiatry Psychiatr Epidemiol 1998; 33:97–103.
- Miotto P, De Coppi M, Frezza M, Petretto D, Masala C, Preti A. Suicidal ideation and aggressiveness in school-aged youths. Psychiatry Res 2003; 120:247–255.
- De Man AF, Leduc CP. Suicidal ideation in high school students: depression and other correlates. J Clin Psychol 1995; 51:173–181.
- Chioqueta AP, Stiles TC. The relationship between psychological buffers, hopelessness, and suicidal ideation: identification of protective factors. Crisis 2007; 28:67–73.
- Hatcher-Kay C, King CA. Depression and suicide. Pediatr Rev 2003; 24:363–371.
- Brown GK, Beck AT, Steer RA, Grisham JR. Risk factors for suicide in psychiatric outpatients: a 20-year prospective study. J Consult Clin Psychol 2000; 68:371–377.
- Fawcett J, Scheftner WA, Fogg L, et al. Time-related predictors of suicide in major affective disorder. Am J Psychiatry 1990; 147:1189–1194.
- Bulik CM, Carpenter LL, Kupfer DJ, Frank E. Features associated with suicide attempts in recurrent major depression. J Affect Disord 1990; 18:29–37.
- Drake RE, Gates C, Cotton PG, Whitaker A. Suicide among schizophrenics. Who is at risk? J Nerv Ment Dis 1984; 172:613–617.
- Oquendo MA, Galfalvy H, Russo S, et al. Prospective study of clinical predictors of suicidal acts after a major depressive episode in patients with major depressive disorder or bipolar disorder. Am J Psychiatry 2004; 161:1433–1441.
- Mann JJ, Ellis SP, Waternaux CM, et al. Classification trees distinguish suicide attempters in major psychiatric disorders: a model of clinical decision making. J Clin Psychiatry 2008; 69:23–31.
- Galfalvy HC, Oquendo MA, Mann JJ. Evaluation of clinical prognostic models for suicide attempts after a major depressive episode. Acta Psychiatr Scand 2008; 117:244–252.
- Crosby AE, Cheltenham MP, Sacks JJ. Incidence of suicidal ideation and behavior in the United States, 1994. Suicide Life Threat Behav 1999; 29:131–140.
- Weissman MM, Bland RC, Canino GJ, et al. Prevalence of suicide ideation and suicide attempts in nine countries. Psychol Med 1999; 29:9–17.
- Kessler RC, Borges G, Walters EE. Prevalence of and risk factors for lifetime suicide attempts in the National Comorbidity Survey. Arch Gen Psychiatry 1999; 56:617–626.
- Bernal M, Haro JM, Bernert S, et al; ESEMED/MHEDEA Investigators. Risk factors for suicidality in Europe: results from the ESEMED study. J Affect Disord 2007; 101:27–34.
- Selby EA, Anestis MD, Bender TW, et al. Overcoming the fear of lethal injury: evaluating suicidal behavior in the military through the lens of the Interpersonal-Psychological Theory of Suicide. Clin Psychol Rev 2010; 30:298–307.
- American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 3rd ed. Washington, DC: American Psychiatric Association; 1980:236–238.
- Schnurr PP, Friedman MJ, Bernardy NC. Research on posttraumatic stress disorder: epidemiology, pathophysiology, and assessment. J Clin Psychol 2002; 58:877–889.
- Saigh PA, Bremner JD. The history of posttraumatic stress disorder. In: Saigh PA, Bremner JD, eds. Posttraumatic Stress Disorder: A Comprehensive Text. Boston, MA: Allyn & Bacon; 1999:1–17.
- Hageman I, Andersen HS, Jørgensen MB. Post-traumatic stress disorder: a review of psychobiology and pharmacotherapy. Acta Psychiatr Scand 2001; 104:411–422.
- American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. Text Revision. 4th ed. Washington, DC: American Psychiatric Association; 2000:463–468.
- Sher L, Yehuda R. Preventing suicide among returning combat veterans: a moral imperative. Mil Med 2011; 176:601–602.
- Davidson JR, Hughes D, Blazer DG, George LK. Post-traumatic stress disorder in the community: an epidemiological study. Psychol Med 1991; 21:713–721.
- Kessler RC, Sonnega A, Bromet E, Hughes M, Nelson CB. Posttraumatic stress disorder in the National Comorbidity Survey. Arch Gen Psychiatry 1995; 52:1048–1060.
- Sher L. Recognizing post-traumatic stress disorder. QJM 2004; 97:1–5.
- Kaplan GB, Vasterling JJ, Vedak PC. Brain-derived neurotrophic factor in traumatic brain injury, post-traumatic stress disorder, and their comorbid conditions: role in pathogenesis and treatment. Behav Pharmacol 2010; 21:427–437.
- Taubman-Ben-Ari O, Rabinowitz J, Feldman D, Vaturi R. Post-traumatic stress disorder in primary-care settings: prevalence and physicians’ detection. Psychol Med 2001; 31:555–560.
- Tanielian T, Jaycox LH, editors. Invisible Wounds of War. Psychological and Cognitive Injuries, Their Consequences, and Services to Assist Recovery. Santa Monica, CA: RAND Corporation; 2008.
- Spooner MH. Suicide claiming more British Falkland veterans than fighting did. CMAJ 2002; 166:1453.
- Kapur N, While D, Blatchley N, Bray I, Harrison K. Suicide after leaving the UK armed forces—a cohort study. PLoS Med 2009; 6:e26.
- A brief history of the Falkland Islands. Part 7—The 1982 War and Beyond. http://www.falklands.info/history/history7.html. Accessed January 5, 2012.
- Sher L, Vilens A, editors. War and Suicide. Hauppauge, New York: Nova Science Publishers; 2009.
- Kaplan MS, McFarland BH, Huguet N. Firearm suicide among veterans in the general population: findings from the National Violent Death Reporting System. J Trauma 2009; 67:503–507.
- Jakupcak M, Cook J, Imel Z, Fontana A, Rosenheck R, McFall M. Posttraumatic stress disorder as a risk factor for suicidal ideation in Iraq and Afghanistan War veterans. J Trauma Stress 2009; 22:303–306.
- Tarrier N, Gregg L. Suicide risk in civilian PTSD patients—predictors of suicidal ideation, planning and attempts. Soc Psychiatry Psychiatr Epidemiol 2004; 39:655–661.
- Bell JB, Nye EC. Specific symptoms predict suicidal ideation in Vietnam combat veterans with chronic post-traumatic stress disorder. Mil Med 2007; 172:1144–1147.
- Kramer TL, Lindy JD, Green BL, Grace MC, Leonard AC. The comorbidity of post-traumatic stress disorder and suicidality in Vietnam veterans. Suicide Life Threat Behav 1994; 24:58–67.
- Ferrada-Noli M, Asberg M, Ormstad K. Suicidal behavior after severe trauma. Part 2: The association between methods of torture and of suicidal ideation in posttraumatic stress disorder. J Trauma Stress 1998; 11:113–124.
- Tiet QQ, Finney JW, Moos RH. Recent sexual abuse, physical abuse, and suicide attempts among male veterans seeking psychiatric treatment. Psychiatr Serv 2006; 57:107–113.
- Pietrzak RH, Goldstein MB, Malley JC, Rivers AJ, Johnson DC, Southwick SM. Risk and protective factors associated with suicidal ideation in veterans of Operations Enduring Freedom and Iraqi Freedom. J Affect Disord 2010; 123:102–107.
- Guerra VS, Calhoun PS; Mid-Atlantic Mental Illness Research, Education and Clinical Center Workgroup. Examining the relation between posttraumatic stress disorder and suicidal ideation in an OEF/OIF veteran sample. J Anxiety Disord 2011; 25:12–18.
- Maguen S, Luxton DD, Skopp NA, et al. Killing in combat, mental health symptoms, and suicidal ideation in Iraq war veterans. J Anxiety Disord 2011; 25:563–567.
- Jakupcak M, Hoerster KD, Varra A, Vannoy S, Felker B, Hunt S. Hopelessness and suicidal ideation in Iraq and Afghanistan War Veterans reporting subthreshold and threshold posttraumatic stress disorder. J Nerv Ment Dis 2011; 199:272–275.
- Lemaire CM, Graham DP. Factors associated with suicidal ideation in OEF/OIF veterans. J Affect Disord 2011; 130:231–238.
- Sher L. The concept of post-traumatic mood disorder. Med Hypotheses 2005; 65:205–210.
- Sher L. Suicide in war veterans: the role of comorbidity of PTSD and depression. Expert Rev Neurother 2009; 9:921–923.
- Blank AS. Clinical detection, diagnosis, and differential diagnosis of posttraumatic stress disorder. Psychiatr Clin North Am 1994; 17:351–383.
- Denneson LM, Basham C, Dickinson KC, et al. Suicide risk assessment and content of VA health care contacts before suicide completion by veterans in Oregon. Psychiatr Serv 2010; 61:1192–1197.
- US Department of Veterans Affairs. Mental Health Suicide Prevention. http://www.mentalhealth.va.gov/suicide_prevention. Accessed December 8, 2011.
- Gliatto MF, Rai AK. Evaluation and treatment of patients with suicidal ideation. Am Fam Physician 1999; 59:1500–1506.
- Sher L, Oquendo MA, Mann JJ. Risk of suicide in mood disorders. Clin Neurosci Res 2001; 1:337–344.
- Oquendo MA, Currier D, Mann JJ. Prospective studies of suicidal behavior in major depressive and bipolar disorders: what is the evidence for predictive risk factors? Acta Psychiatr Scand 2006; 114:151–158.
- Sher L. Alcoholism and suicidal behavior: a clinical overview. Acta Psychiatr Scand 2006; 113:13–22.
- Moscicki EK. Identification of suicide risk factors using epidemiologic studies. Psychiatr Clin North Am 1997; 20:499–517.
- Goldman HH. Review of General Psychiatry, 5th ed. New York, NY: Lange Medical Books/McGraw-Hill; 2000.
- Hendin H, Haas AP. Suicide and guilt as manifestations of PTSD in Vietnam combat veterans. Am J Psychiatry 1991; 148:586–591.
- Henning KR, Frueh BC. Combat guilt and its relationship to PTSD symptoms. J Clin Psychol 1997; 53:801–808.
- Marx BP, Foley KM, Feinstein BA, Wolf EJ, Kaloupek DG, Keane TM. Combat-related guilt mediates the relations between exposure to combat-related abusive violence and psychiatric diagnoses. Depress Anxiety 2010; 27:287–293.
- Hetrick SE, Purcell R, Garner B, Parslow R. Combined pharmacotherapy and psychological therapies for post traumatic stress disorder (PTSD). Cochrane Database Syst Rev 2010; (7):CD007316.
- Brent DA, Oquendo M, Birmaher B, et al. Familial pathways to early-onset suicide attempt: risk for suicidal behavior in offspring of mood-disordered suicide attempters. Arch Gen Psychiatry 2002; 59:801–807.
- Australian Department of Health and Ageing. Fact sheet 6: Resilience, vulnerability, and suicide prevention. Living is for Everyone (LIFE) fact sheets. www.livingisforeveryone.com.au/LIFE-Fact-sheets.html. Accessed December 8, 2011.
KEY POINTS
- The association of suicidal ideation with PTSD and depression and the prevalence of these conditions in combat veterans underline the importance of recognizing and treating these conditions.
- In veterans with PTSD related to combat experience, combat-related guilt may be a significant predictor of suicidal ideation and attempts.
- Research addressing PTSD, depression, and suicidal behavior in war veterans is critically needed to improve our understanding of the nature of these conditions and how best to treat them.
Talking to patients: Barriers to overcome
Cultural diversity is indeed a barrier we need to clear to provide good health care to all. But the challenge of physician-patient communication goes beyond differences in sex, race, ethnicity, age, and level of literacy. Dialogue between physicians and patients is not always easy. There are barriers everywhere that can obstruct our best plans and impede a successful clinical outcome. And we may not even realize that the patient has hit a barrier until long after the visit, when we discover that medication has been taken “the wrong way” or not at all, that studies were not obtained, or that follow-up visits were not arranged.
Communication barriers include use of medical terms that we assume patients understand, lack of attention to clues of anxiety in our patients or their families that will adversely affect their memory of the visit, not finding out the patient’s actual concerns, and loss of the human connection in our rush to finish charting and to stay on time. But it is this connection that often drives the action plan to a successful conclusion.
What can we do in this era of one patient every 15 minutes? Try to make a genuine connection with every patient. This will enhance engagement and the retention of knowledge. Address the patient’s concerns, not just our own. Write legibly or type in the patient instruction section of the electronic medical record the key messages from the visit—diagnosis, plan, tests yet to be done—and give this to the patient at every visit. It is not insulting to do this, nor is it insulting to explain the details of what may seem like an intuitively obvious procedure or therapy. Ask the patient what his or her major concern is, and be sure to address it.
Often, the biggest barrier is that we physicians forget that each patient comes to us with a unique set of fears, rationalizations, and biases that we need to address (even if initially unspoken), just as we address the challenges of diagnosis and therapy. Patients don’t all think like doctors, but we need to be able to think like patients.
Overcoming health care disparities via better cross-cultural communication and health literacy
An English-speaking middle-aged woman from an ethnic minority group presents to her internist for follow-up of her chronic medical problems, which include diabetes, high blood pressure, asthma, and high cholesterol. Although she sees her physician regularly, her medical conditions are not optimally controlled.
At one of the visits, her physician gives her a list of her medications and, while reviewing it, explains—not for the first time—the importance of taking all of them as prescribed. The patient looks at the paper for a while, and then cautiously tells the physician, “But I can’t read.”
This patient presented to our practice several years ago. The scenario may be familiar to many primary care physicians, except for the ending: the patient telling her physician that she cannot read.
Her case raises several questions:
- Why did the physician not realize at the first encounter that she could not read the names of her prescribed medications?
- Why did the patient wait to tell her physician that important fact?
- And to what extent did her inability to read contribute to the poor control of her chronic medical problems?
Patients like this one are the human faces behind the statistics about health disparities—the worse outcomes noted in minority populations. Here, we discuss the issues of cross-cultural communication and health literacy as they relate to health care disparities.
DISPARITY IS NOT ONLY DUE TO LACK OF ACCESS
Health care disparity has been an important topic of discussion in medicine in the past decade.
In a 2003 publication,1 the Institute of Medicine identified lower quality of health care in minority populations as a serious problem. Further, it disputed the long-held belief that the differences in health care between minority and nonminority populations could be explained by lack of access to medical services in minority groups. Instead, it cited factors at the level of the health care system, the level of the patient, and the “care-process level” (ie, the physician-patient encounter) as contributing in distinct ways to the problem.1
A CALL FOR CULTURAL COMPETENCE
In a policy paper published in 2010, the American College of Physicians2 reviewed the progress made in addressing health care disparities. In addition, noting that an individual’s environment, income, level of education, and other factors all affect health, it called for a concerted effort to improve insurance coverage, health literacy, and the health care delivery system; to address stressors both within and outside the health care system; and to recruit more minority health care workers.
Most of these measures are beyond what a busy practicing clinician can accomplish alone. However, on an individual level, we can try to improve our cultural competence in our interactions with patients.
The report recommends that physicians and other health care professionals be sensitive to cultural diversity among patients. It also says we should recognize our preconceived perceptions of minority patients that may affect their treatment and contribute to disparities in health care in minorities. To those ends, it calls for cultural competence training in medical school to improve cultural awareness and sensitivity.2
The Office of Minority Health broadly defines cultural and linguistic competence in health as “a set of congruent behaviors, attitudes, and policies that come together in a system, agency, or among professionals that enables effective work in cross-cultural situations.”3 Cultural competence training should focus on being aware of one’s personal bias, as well as on education about culture-specific norms or knowledge of possible causes of mistrust in minority groups.
For example, many African Americans may mistrust the medical system, given the awareness of previous inequities such as the notorious Tuskegee syphilis study (in which informed consent was not used and treatment that was needed was withheld). Further, beliefs about health in minority populations may be discordant with the Western medical model.4
RECOGNIZING OUR OWN BIASES
Preconceived perceptions on the part of the physician may be shaped by previous experiences with patients from a specific minority group or by personal bias. Unfortunately, even a well-meaning physician who has tried to learn about the cultural norms of specific minority groups risks stereotyping by assuming that all members of that group hold the same beliefs. Patients’ perceptions of physicians, in turn, can be molded by previous experiences of health care inequities or unfavorable interactions with physicians.
For example, in the case we described above, perhaps the physician had assumed that the patient was noncompliant and therefore did not look for reasons for the poor control of her medical problems, or maybe the patient did not trust the physician enough to explain the reason for her difficulty with understanding how to take her medications.
Being aware of our own unconscious stereotyping of minority groups is an important step in effectively communicating with patients from different cultural backgrounds or with low health literacy. We also need to reflect on our own health belief system and try to incorporate the patient’s viewpoint into decision-making.
If, on reflection, we recognize that we do harbor biases, we ought to think about ways to better accommodate patients from different backgrounds and literacy levels, including trying to learn more about their culture or mastering techniques to effectively explain treatment plans to low-literacy patients.
ALL ENCOUNTERS WITH PATIENTS ARE ‘CROSS-CULTURAL’
In health care, “cross-cultural communication” does not refer only to interactions between persons from different ethnic backgrounds or with different beliefs about health. Health care has a culture of its own, creating a cross-cultural encounter the moment a person enters your office or clinic in the role of “patient.”
Carrillo et al5 categorized issues that may pose difficulties in a cross-cultural encounter as those of authority, physical contact, communication styles, gender, sexuality, and family.
Physician-patient communication is a complicated issue. Many patients will not question a physician if their own cultural norms view it as disrespectful—even if they have very specific fears about the diagnosis or treatment plan. They may also defer any important decision to a family member who has the authority to make decisions for the family.
Frequently, miscommunication is unintentional. In a recent study of hospitalized patients,6 77% of the physicians believed that their patients understood their diagnoses, while only 57% of patients could correctly state this information.
WHAT DOES THE PATIENT THINK?
A key issue in cross-cultural communication, and one that is often neglected, is addressing a patient’s fears about his or her illness. In the study mentioned above, more than half of the patients who reported having anxieties or fears in the hospital stated that their physicians did not discuss these fears.6 But if we fail to address them, patients may be less satisfied with the treatment plan and may not accept our recommendations.
A patient’s understanding of his or her illness may be very different from the biomedical explanation. For example, we once saw an elderly man who was admitted to the hospital with back pain due to metastatic prostate cancer, but who was convinced that his symptoms were caused by a voodoo “hex” placed on him by his ex-wife.
Kleinman et al7 suggested eliciting the patient’s explanatory model of illness with simple, open-ended questions. For the man who thought that his ex-wife had put a hex on him, asking “What do you think has caused your problem?” during the initial history-taking would have allowed him to express his concern about the hex, giving the physician an opportunity to learn of this fear and then to offer the biomedical explanation for the problem and for the recommended treatment.
What happens more often in practice is that the specific fear is not addressed at the start of the encounter. Consequently, the patient is less likely to follow through with the treatment plan, as he or she does not feel the prescribed treatment is fixing the real problem. This process of exploring the explanatory model of illness may be viewed on a practical level as a way of managing expectations in the clinical care of culturally diverse populations.
HEALTH LITERACY: MORE THAN THE ABILITY TO READ
The better you know how to read, the healthier you probably are. In fact, a study found that a person’s literacy level correlated more strongly with health than did race or formal education level.9 (Apparently, attending school does not necessarily mean that people know how to read, and not attending school doesn’t mean that they don’t.)
Even more important than literacy may be health literacy, defined by Ratzan and Parker as “the degree to which individuals have the capacity to obtain, process, and understand basic health information and services needed to make appropriate health decisions.”8 It includes basic math and critical-thinking skills that allow patients to use medications properly and participate in treatment decisions. Thus, health literacy is much more than the ability to read.
Even people who read and write very well may have trouble when confronted with the complexities of navigating our health care system, such as appointment scheduling, specialty referrals, and follow-up testing and procedures: their health literacy may be lower than their general literacy. We had one patient, a highly trained professional, who was confused by the colonoscopy-preparation instructions on a patient handout. Another could not understand the dosing of eye drops after cataract surgery because the instructions on the discharge paperwork were unclear.
However, limited health literacy disproportionately affects minority groups and is linked to poorer health care outcomes. Thus, addressing limited health literacy is important in addressing health care disparities. Effective physician-patient communication about treatment plans is fundamental to providing equitable care to patients from minority groups, some of whom may be at high risk for low health literacy.
Below, we will review some of the data on health literacy and offer suggestions for screening and interventions for those whose health literacy is limited.
36% have basic or below-basic reading skills
Every 10 years, the US Department of Education completes its National Assessment of Adult Literacy. Its 2003 survey—the most recent—included 19,000 adults in the community and in prison, interviewed at their place of residence.10 Each participant completed a set of tasks to measure his or her ability to read, understand, and interpret text and to use and interpret numbers.
Participants were divided into four categories based on the results: proficient (12%), intermediate (53%), basic (22%), and below basic (14%). Additionally, 5% of potential participants could not be tested because they had insufficient skills to participate in the survey.
Low literacy puts patients at risk
Although literacy is not the same as health literacy, those who have basic or below-basic literacy skills (36% of the US population) are, functionally, at high risk of encountering problems in the US health care system. For example, they would have difficulty with most patient education handouts and health insurance forms.
Limited health literacy exacts both personal and financial costs. Patients with low health literacy are less likely to understand how to take their medications, what prescription warning labels mean, how to schedule follow-up appointments, and how to fill out health insurance forms.11–14
Medicare managed-care enrollees are more likely to be hospitalized if they have limited health literacy,15 and diabetic Medicaid patients who have limited health literacy are less likely to have good glycemic control.16 One study showed annual health care costs of $10,688 for Medicaid enrollees with limited health literacy compared with $2,891 for all enrollees.17 The total cost of limited health literacy to the US health care system is estimated to be between $50 and $73 billion per year.18
Screening for limited health literacy: You can’t tell just by looking
Given the high costs of low health literacy, identifying patients who have it is of paramount importance.
Groups who are more likely to have limited health literacy include the elderly, the poor, the unemployed, high school dropouts, members of minority groups, recent immigrants, and people for whom English is a second language.
However, these demographic factors are not sufficient as a screen for low health literacy—you can’t tell just by looking. Red flags include difficulty filling out forms in the office, missed appointments, nonadherence to medication regimens, failure to follow up with scheduled testing, and difficulty reading written materials, often masked with a statement such as “I forgot my glasses and will read this at home.”
A number of screening tests have been developed, including the Rapid Estimate of Adult Literacy in Medicine (REALM)19 and the Test for Functional Health Literacy in Adults (TOFHLA).20 These tests are long, making them difficult to incorporate into a patient visit in a busy primary care practice, but they are useful for research. A newer screening test asks the patient to review a nutrition label and answer six questions.21
The most useful screening test for clinical use may consist of a single question. Two questions have been validated:
- “How often do you need to have someone help you when you read instructions, pamphlets, or other written material from your doctor or pharmacy?” Positive answers are “sometimes,” “often,” or “always.”
- “How confident are you filling out medical forms by yourself?” Positive answers are “somewhat,” “a little bit,” or “not at all.”22–24
These questions can be included either in the initial screening by a nurse or medical assistant or as part of the social history portion of the interview with the physician.
A “brown bag review” can also be helpful. Patients are asked to bring in their medications (often in a brown bag—hence the name). Asking the patient to identify each medication by name and the indication for it can uncover knowledge gaps that indicate low health literacy.
The point to remember is that patients with low health literacy will probably not tell you that they do not understand. However, they would appreciate being asked in a nonthreatening manner.
Make your office a shame-free environment
Many experts advocate a “universal precautions approach,” in which interventions to address low health literacy are incorporated into routine office practice for all patients. Practice sites should adopt a culture of a “shame-free environment,” in which support staff encourage patients to ask questions and are trained to offer assistance to those having difficulty reading or filling out forms.
On a broader level, medical offices and hospitals can partner with adult-learning specialists to help patients gain skills to navigate the health care system. All signage should be clear and should use plain language as opposed to medical terms. Medical forms and questionnaires should be designed to collect only essential information and should be written at a sixth-grade reading level or below. Patient instructions and educational materials should also be clear and free of jargon.
The ‘teach-back’ technique
The “teach-back” technique is a simple method to confirm patient understanding at the end of the visit. This involves asking patients in a nonthreatening way to explain or demonstrate what they have been told. Examples:
- “I want to make sure I have explained things correctly. Can you tell me how you plan to take your medication when you go home?”
- “I want to make sure I have done a good job explaining things to you. When you go home and tell your spouse about your visit today, what will you say?”
These questions should be asked in a nonthreatening way. Put the burden of explanation on yourself as the first step, and let the patient know you are willing to explain again, more thoroughly, any instructions that may not have been clearly understood.
Other measures
Pictures and computer-based education may be useful for some patients who have difficulty reading.
Weiss25 advocates six steps to improve communication with patients in all encounters: slow down; use plain, nonmedical language; show or draw pictures; limit the amount of information provided; use the teach-back technique; and create a shame-free environment, encouraging questions.
Improving health literacy, as it relates to cross-cultural communication of treatment plans, also requires understanding health beliefs, which are often rooted in cultural norms, so that physician and patient can agree on a mutually acceptable plan of care. Physicians should be aware of preferences for nontraditional or complementary treatments that may reflect specific cultural beliefs.
IF THE PATIENT DOES NOT SPEAK ENGLISH
Verbal communication across language barriers poses another layer of challenge. Whenever possible, a trained interpreter should be used when treating a patient who does not speak the practitioner’s language. When family members serve as interpreters, the patient may not fully disclose facts about the history of illness or specific symptoms, and the family members may put their own “twist” on the story when translating.
The physician should speak directly to the patient in a normal tone of voice. In this setting, also remember that nonverbal communication can be misinterpreted. Gestures should be avoided. Finally, be aware that personal space is viewed differently depending on cultural background, as is eye contact.
It is helpful to have a pre-interview meeting with the interpreter to explain the format of the interview, as well as a post-interview meeting to ensure all parties felt they effectively communicated during the encounter.
TOWARD EQUITABLE CARE
Health care disparities are the result of multiple determinants. In December 2008, a National Institutes of Health summit conference cited not only barriers to access, but also the interaction of biological, behavioral, social, environmental, economic, cultural, and political factors, and noted that the causes and effects of health disparities transcend health care.26
Clearly, an individual physician’s efforts will not be all that is needed to eliminate health disparities. A team-based approach is essential, using skills of nonphysician members of the health care team such as nurses, medical assistants, social workers, and case managers. Continued opportunity for professional training and development in provider-patient communication skills should be offered.
However, the impact of effective cross-cultural communication and of managing low health literacy at the physician-patient level should not be underestimated. As practitioners treating patients from diverse backgrounds, we can do our part in working toward equitable care for all patients by improving our self-awareness, eliciting the patient’s explanatory model, and ensuring that patients with low health literacy or language barriers understand their treatment plans.
- Institute of Medicine of the National Academies. Unequal Treatment: Confronting Racial and Ethnic Disparities in Healthcare; 2003. http://www.nap.edu/openbook.php?record_id=12875&page=R1. Accessed January 5, 2012.
- American College of Physicians. Racial and Ethnic Disparities in Health Care, Updated 2010. Philadelphia: American College of Physicians; 2010: Policy Paper.
- US Department of Health and Human Services. The Office of Minority Health. What Is Cultural Competency? http://minorityhealth.hhs.gov/templates/browse.aspx?lvl=2&lvlid=11. Accessed January 5, 2012.
- Eiser AR, Ellis G. Viewpoint: cultural competence and the African American experience with health care: the case for specific content in cross-cultural education. Acad Med 2007; 82:176–183.
- Carrillo JE, Green AR, Betancourt JR. Cross-cultural primary care: a patient-based approach. Ann Intern Med 1999; 130:829–834.
- Olson DP, Windish DM. Communication discrepancies between physicians and hospitalized patients. Arch Intern Med 2010; 170:1302–1307.
- Kleinman A, Eisenberg L, Good B. Culture, illness, and care: clinical lessons from anthropologic and cross-cultural research. Ann Intern Med 1978; 88:251–258.
- National Library of Medicine. Current bibliographies in medicine 2000–1. Health Literacy. www.nlm.nih.gov/archive//20061214/pubs/cbm/hliteracy.html. Accessed January 5, 2012.
- Sentell TL, Halpin HA. Importance of adult literacy in understanding health disparities. J Gen Intern Med 2006; 21:862–866.
- Kutner M, Greenberg E, Jin Y, Paulsen C. The Health Literacy of America’s Adults: Results From the 2003 National Assessment of Adult Literacy (NCES 2006–483). US Department of Education. Washington, DC: National Center for Education Statistics; 2006. http://nces.ed.gov/pubs2006/2006483.pdf. Accessed January 5, 2012.
- Williams MV, Parker RM, Baker DW, et al. Inadequate functional health literacy among patients at two public hospitals. JAMA 1995; 274:1677–1682.
- Baker DW, Parker RM, Williams MV, et al. The health care experience of patients with low literacy. Arch Fam Med 1996; 5:329–334.
- Fact Sheet: health literacy and understanding medical information. Lawrenceville, NJ: Center for Health Care Strategies; 2002.
- Wolf MS, Davis TC, Tilson HH, Bass PF, Parker RM. Misunderstanding of prescription drug warning labels among patients with low literacy. Am J Health Syst Pharm 2006; 63:1048–1055.
- Baker DW, Gazmararian JA, Williams MV, et al. Functional health literacy and the risk of hospital admission among Medicare managed care enrollees. Am J Public Health 2002; 92:1278–1283.
- Schillinger D, Barton LR, Karter AJ, Wang F, Adler N. Does literacy mediate the relationship between education and health outcomes? A study of a low-income population with diabetes. Public Health Rep 2006; 121:245–254.
- Weiss BD, Palmer R. Relationship between health care costs and very low literacy skills in a medically needy and indigent Medicaid population. J Am Board Fam Pract 2004; 17:44–47.
- Friedland RB. Understanding health literacy: new estimates of the costs of inadequate health literacy. Washington, DC: National Academy on an Aging Society; 1998.
- Davis TC, Long SW, Jackson RH, et al. Rapid estimate of adult literacy in medicine: a shortened screening instrument. Fam Med 1993; 25:391–395.
- Baker DW, Williams MV, Parker RM, Gazmararian JA, Nurss J. Development of a brief test to measure functional health literacy. Patient Educ Couns 1999; 38:33–42.
- Weiss BD, Mays MZ, Martz W, et al. Quick assessment of literacy in primary care: the newest vital sign. Ann Fam Med 2005; 3:514–522.
- Chew LD, Bradley KA, Boyko EJ. Brief questions to identify patients with inadequate health literacy. Fam Med 2004; 36:588–594.
- Morris NS, MacLean CD, Chew LD, Littenberg B. The Single Item Literacy Screener: evaluation of a brief instrument to identify limited reading ability. BMC Fam Pract 2006; 7:21.
- Wallace LS, Rogers ES, Roskos SE, Holiday DB, Weiss BD. Brief report: screening items to identify patients with limited health literacy skills. J Gen Intern Med 2006; 21:874–877.
- Weiss BD. Health Literacy and Patient Safety: Help Patients Understand. 2nd ed. American Medical Association Foundation and American Medical Association. www.ama-assn.org/ama1/pub/upload/mm/367/healthlitclinicians.pdf. Accessed January 5, 2012.
- Dankwa-Mullan I, Rhee KB, Williams K, et al. The science of eliminating health disparities: summary and analysis of the NIH summit recommendations. Am J Public Health 2010; 100(suppl 1):S12–S18.
Below, we will review some of the data on health literacy and offer suggestions for screening and interventions for those whose health literacy is limited.
36% have basic or below-basic reading skills
Every 10 years, the US Department of Education completes its National Assessment of Adult Literacy. Its 2003 survey—the most recent—included 19,000 adults in the community and in prison, interviewed at their place of residence.10 Each participant completed a set of tasks to measure his or her ability to read, understand, and interpret text and to use and interpret numbers.
Participants were divided into four categories based on the results: proficient (12%), intermediate (53%), basic (22%), and below basic (14%). Additionally, 5% of potential participants could not be tested because they had insufficient skills to participate in the survey.
Low literacy puts patients at risk
Although literacy is not the same as health literacy, functionally, those who have basic or below-basic literacy skills (36% of the US population) are at high risk for encountering problems in the US health care system. For example, they would have difficulty with most patient education handouts and health insurance forms.
Limited health literacy exacts both personal and financial costs. Patients with low health literacy are less likely to understand how to take their medications, what prescription warning labels mean, how to schedule follow-up appointments, and how to fill out health insurance forms.11–14
Medicare managed-care enrollees are more likely to be hospitalized if they have limited health literacy,15 and diabetic Medicaid patients who have limited health literacy are less likely to have good glycemic control.16 One study showed annual health care costs of $10,688 for Medicaid enrollees with limited health literacy compared with $2,891 for all enrollees.17 The total cost of limited health literacy to the US health care system is estimated to be between $50 and $73 billion per year.18
Screening for limited health literacy: You can’t tell just by looking
Given the high costs of low health literacy, identifying patients who have it is of paramount importance.
Groups who are more likely to have limited health literacy include the elderly, the poor, the unemployed, high school dropouts, members of minority groups, recent immigrants, and people for whom English is a second language.
However, these demographic factors are not sufficient as a screen for low health literacy—you can't tell just by looking. Red flags for low health literacy include difficulty filling out forms in the office, missed appointments, nonadherence to medication regimens, failure to follow up with scheduled testing, and difficulty reading written materials, often masked with a statement such as “I forgot my glasses and will read this at home.”
A number of screening tests have been developed, including the Rapid Estimate of Adult Literacy in Medicine (REALM)19 and the Test for Functional Health Literacy in Adults (TOFHLA).20 These tests are long, making them difficult to incorporate into a patient visit in a busy primary care practice, but they are useful for research. A newer screening test asks the patient to review a nutrition label and answer six questions.21
The most useful screening test for clinical use may consist of a single question. Questions that have been validated:
- “How often do you need to have someone help you when you read instructions, pamphlets, or other written material from your doctor or pharmacy?” Positive answers are “sometimes,” “often,” or “always.”
- “How confident are you filling out medical forms by yourself?” Positive answers are “somewhat,” “a little bit,” or “not at all.”22–24
These questions can be included either in the initial screening by a nurse or medical assistant or as part of the social history portion of the interview with the physician.
A “brown bag review” can also be helpful. Patients are asked to bring in their medications (often in a brown bag—hence the name). Asking the patient to identify each medication by name and the indication for it can uncover knowledge gaps that indicate low health literacy.
The point to remember is that patients with low health literacy will probably not tell you that they do not understand. However, they would appreciate being asked in a nonthreatening manner.
Make your office a shame-free environment
Many experts advocate a “universal precautions approach,” in which interventions to address low health literacy are incorporated into routine office practice for all patients. Practice sites should adopt a culture of a “shame-free environment,” in which support staff encourage patients to ask questions and are trained to offer assistance to those having difficulty reading or filling out forms.
On a broader level, medical offices and hospitals can partner with adult-learning specialists to help patients gain skills to navigate the health care system. All signage should be clear and should use plain language as opposed to medical terms. Medical forms and questionnaires should be designed to collect only essential information and should be written at a sixth-grade reading level or below. Patient instructions and educational materials should also be clear and free of jargon.
The ‘teach-back’ technique
The “teach-back” technique is a simple method to confirm patient understanding at the end of the visit. This involves asking patients in a nonthreatening way to explain or demonstrate what they have been told. Examples:
- “I want to make sure I have explained things correctly. Can you tell me how you plan to take your medication when you go home?”
- “I want to make sure I have done a good job explaining things to you. When you go home and tell your spouse about your visit today, what will you say?”
These questions should be asked in a nonthreatening way. Put the burden of explanation on yourself as the first step, and let the patient know you are willing to explain again more thoroughly any instructions that may have not been clearly understood.
Other measures
Pictures and computer-based education may be useful for some patients who have difficulty reading.
Weiss25 advocates six steps to improve communication with patients in all encounters: slow down; use plain, nonmedical language; show or draw pictures; limit the amount of information provided; use the teach-back technique; and create a shame-free environment, encouraging questions.
Improving health literacy, as it relates to cross-cultural communication of treatment plans, must encompass understanding of health beliefs often based on cultural norms, in order to come to agreement on a mutually acceptable plan of care. Physicians should be aware of preferences for nontraditional or complementary treatments that may reflect specific cultural beliefs.
IF THE PATIENT DOES NOT SPEAK ENGLISH
Verbal communication across language barriers poses another layer of challenge. A trained interpreter should be used whenever possible when treating a patient who speaks a different language than that of the practitioner. When family members are used as interpreters, there are risks that the patient may not fully disclose facts about the history of illness or specific symptoms, and also that family members may place their own “twist” on the story when translating.
The physician should speak directly to the patient in a normal tone of voice. In this setting, also remember that nonverbal communication can be misinterpreted. Gestures should be avoided. Finally, be aware that personal space is viewed differently depending on cultural background, as is eye contact.
It is helpful to have a pre-interview meeting with the interpreter to explain the format of the interview, as well as a post-interview meeting to ensure all parties felt they effectively communicated during the encounter.
TOWARD EQUITABLE CARE
Health care disparities are the result of multiple determinants. In December 2008, a National Institutes of Health summit conference cited not only barriers to access, but also the interaction of biological, behavioral, social, environmental, economic, cultural, and political factors, and noted that the causes and effects of health disparities transcend health care.26
Clearly, an individual physician’s efforts will not be all that is needed to eliminate health disparities. A team-based approach is essential, using skills of nonphysician members of the health care team such as nurses, medical assistants, social workers, and case managers. Continued opportunity for professional training and development in provider-patient communication skills should be offered.
However, the impact of effective cross-cultural communication and managing low health literacy populations on the physician-patient level should not be understated. As practitioners treating patients from diverse backgrounds, improving self-awareness, eliciting the patient’s explanatory model, and assuring understanding of treatment plans for patients with low health literacy or with language barriers, we can do our part in working toward equitable care for all patients.
An English-speaking middle-aged woman from an ethnic minority group presents to her internist for follow-up of her chronic medical problems, which include diabetes, high blood pressure, asthma, and high cholesterol. Although she sees her physician regularly, her medical conditions are not optimally controlled.
At one of the visits, her physician gives her a list of her medications and, while reviewing it, explains—not for the first time—the importance of taking all of them as prescribed. The patient looks at the paper for a while, and then cautiously tells the physician, “But I can’t read.”
This patient presented to our practice several years ago. The scenario may be familiar to many primary care physicians, except for the ending, ie, the patient telling her physician that she cannot read.
Her case raises several questions:
- Why did the physician not realize at the first encounter that she could not read the names of her prescribed medications?
- Why did the patient wait to tell her physician that important fact?
- And to what extent did her inability to read contribute to the poor control of her chronic medical problems?
Patients like this one are the human faces behind the statistics about health disparities—the worse outcomes noted in minority populations. Here, we discuss the issues of cross-cultural communication and health literacy as they relate to health care disparities.
DISPARITY IS NOT ONLY DUE TO LACK OF ACCESS
Health care disparity has been an important topic of discussion in medicine in the past decade.
In a 2003 publication,1 the Institute of Medicine identified lower quality of health care in minority populations as a serious problem. Further, it disputed the long-held belief that the differences in health care between minority and nonminority populations could be explained by lack of access to medical services in minority groups. Instead, it cited factors at the level of the health care system, the level of the patient, and the “care-process level” (ie, the physician-patient encounter) as contributing in distinct ways to the problem.1
A CALL FOR CULTURAL COMPETENCE
In a policy paper published in 2010, the American College of Physicians2 reviewed the progress made in addressing health care disparities. In addition, noting that an individual’s environment, income, level of education, and other factors all affect health, it called for a concerted effort to improve insurance coverage, health literacy, and the health care delivery system; to address stressors both within and outside the health care system; and to recruit more minority health care workers.
Little of this seems within the power of a busy practicing clinician. However, we can each try to improve our cultural competence in our individual interactions with patients.
The report recommends that physicians and other health care professionals be sensitive to cultural diversity among patients. It also says we should recognize our preconceived perceptions of minority patients that may affect their treatment and contribute to disparities in health care in minorities. To those ends, it calls for cultural competence training in medical school to improve cultural awareness and sensitivity.2
The Office of Minority Health broadly defines cultural and linguistic competence in health as “a set of congruent behaviors, attitudes, and policies that come together in a system, agency, or among professionals that enables effective work in cross-cultural situations.”3 Cultural competence training should focus on being aware of one’s personal bias, as well as on education about culture-specific norms or knowledge of possible causes of mistrust in minority groups.
For example, many African Americans may mistrust the medical system, given the awareness of previous inequities such as the notorious Tuskegee syphilis study (in which informed consent was not used and treatment that was needed was withheld). Further, beliefs about health in minority populations may be discordant with the Western medical model.4
RECOGNIZING OUR OWN BIASES
Preconceived perceptions on the part of the physician may be shaped by previous experiences with patients from a specific minority group or by personal bias. Unfortunately, even a well-meaning physician who has tried to learn about the cultural norms of specific minority groups can be at risk of stereotyping by assuming that all members of a group hold the same beliefs. Patients, for their part, may hold perceptions molded by previous experiences of health care inequities or unfavorable interactions with physicians.
For example, in the case we described above, perhaps the physician had assumed that the patient was noncompliant and therefore did not look for reasons for the poor control of her medical problems, or maybe the patient did not trust the physician enough to explain the reason for her difficulty with understanding how to take her medications.
Being aware of our own unconscious stereotyping of minority groups is an important step in effectively communicating with patients from different cultural backgrounds or with low health literacy. We also need to reflect about our own health belief system and try to incorporate the patient’s viewpoint into decision-making.
If, on reflection, we recognize that we do harbor biases, we ought to think about ways to better accommodate patients from different backgrounds and literacy levels, including trying to learn more about their culture or mastering techniques to effectively explain treatment plans to low-literacy patients.
ALL ENCOUNTERS WITH PATIENTS ARE ‘CROSS-CULTURAL’
In health care, “cross-cultural communication” does not refer only to interactions between persons from different ethnic backgrounds or with different beliefs about health. Health care has a culture of its own, creating a cross-cultural encounter the moment a person enters your office or clinic in the role of “patient.”
Carrillo et al5 categorized issues that may pose difficulties in a cross-cultural encounter as those of authority, physical contact, communication styles, gender, sexuality, and family.
Physician-patient communication is a complicated issue. Many patients will not question a physician if their own cultural norms view it as disrespectful—even if they have very specific fears about the diagnosis or treatment plan. They may also defer any important decision to a family member who has the authority to make decisions for the family.
Frequently, miscommunication is unintentional. In a recent study of hospitalized patients,6 77% of the physicians believed that their patients understood their diagnoses, while only 57% of patients could correctly state this information.
WHAT DOES THE PATIENT THINK?
A key issue in cross-cultural communication, and one that is often neglected, is to address a patient’s fears about his or her illness. In the study mentioned above, more than half of the patients who reported having anxieties or fears in the hospital stated that their physicians did not discuss their fears.6 But if we fail to do so, patients may be less satisfied with the treatment plan and may not accept our recommendations.
A patient’s understanding of his or her illness may be very different from the biomedical explanation. For example, we once saw an elderly man who was admitted to the hospital with back pain due to metastatic prostate cancer, but who was convinced that his symptoms were caused by a voodoo “hex” placed on him by his ex-wife.
In the case of the man who believed his ex-wife had placed a hex on him, asking "What do you think has caused your problem?" during the initial history-taking would have allowed him to express this fear, giving the physician an opportunity to acknowledge it and then to offer the biomedical explanation for the problem and for the recommended treatment.
What happens more often in practice is that the specific fear is not addressed at the start of the encounter. Consequently, the patient is less likely to follow through with the treatment plan, as he or she does not feel the prescribed treatment is fixing the real problem. This process of exploring the patient's explanatory model of illness7 may be viewed on a practical level as a way of managing expectations in the clinical care of culturally diverse populations.
HEALTH LITERACY: MORE THAN THE ABILITY TO READ
The better you know how to read, the healthier you probably are. In fact, a study found that a person’s literacy level correlated more strongly with health than did race or formal education level.9 (Apparently, attending school does not necessarily mean that people know how to read, and not attending school doesn’t mean that they don’t.)
Even more important than literacy may be health literacy, defined by Ratzan and Parker as “the degree to which individuals have the capacity to obtain, process, and understand basic health information and services needed to make appropriate health decisions.”8 It includes basic math and critical-thinking skills that allow patients to use medications properly and participate in treatment decisions. Thus, health literacy is much more than the ability to read.
Even people who read and write very well may have trouble when confronted with the complexities of navigating our health care system, such as appointment scheduling, specialty referrals, and follow-up testing and procedures: their health literacy may be lower than their general literacy. We had a patient, a highly trained professional, who was confused by the instructions on a patient handout for preparing for colonoscopy. Another such patient could not understand the dosing of eye drops after cataract surgery because the instructions on the discharge paperwork were unclear.
However, limited health literacy disproportionately affects minority groups and is linked to poorer health care outcomes. Thus, addressing limited health literacy is important in addressing health care disparities. Effective physician-patient communication about treatment plans is fundamental to providing equitable care to patients from minority groups, some of whom may be at high risk for low health literacy.
Below, we will review some of the data on health literacy and offer suggestions for screening and interventions for those whose health literacy is limited.
36% have basic or below-basic reading skills
Every 10 years, the US Department of Education completes its National Assessment of Adult Literacy. Its 2003 survey—the most recent—included 19,000 adults in the community and in prison, interviewed at their place of residence.10 Each participant completed a set of tasks to measure his or her ability to read, understand, and interpret text and to use and interpret numbers.
Participants were divided into four categories based on the results: proficient (12%), intermediate (53%), basic (22%), and below basic (14%). Additionally, 5% of potential participants could not be tested because they had insufficient skills to participate in the survey.
Low literacy puts patients at risk
Although literacy is not the same as health literacy, functionally, those who have basic or below-basic literacy skills (36% of the US population) are at high risk for encountering problems in the US health care system. For example, they would have difficulty with most patient education handouts and health insurance forms.
Limited health literacy exacts both personal and financial costs. Patients with low health literacy are less likely to understand how to take their medications, what prescription warning labels mean, how to schedule follow-up appointments, and how to fill out health insurance forms.11–14
Medicare managed-care enrollees are more likely to be hospitalized if they have limited health literacy,15 and diabetic Medicaid patients who have limited health literacy are less likely to have good glycemic control.16 One study showed annual health care costs of $10,688 for Medicaid enrollees with limited health literacy compared with $2,891 for all enrollees.17 The total cost of limited health literacy to the US health care system is estimated to be between $50 billion and $73 billion per year.18
Screening for limited health literacy: You can’t tell just by looking
Given the high costs of low health literacy, identifying patients who have it is of paramount importance.
Groups who are more likely to have limited health literacy include the elderly, the poor, the unemployed, high school dropouts, members of minority groups, recent immigrants, and people for whom English is a second language.
However, these demographic factors are not sufficient as a screen for low health literacy—you can’t tell just by looking. Red flags for low health literacy include difficulty filling out forms in the office, missed appointments, nonadherence to medication regimens, failure to follow up with scheduled testing, and difficulty reading written materials, often masked with a statement such as “I forgot my glasses and will read this at home.”
A number of screening tests have been developed, including the Rapid Estimate of Adult Literacy in Medicine (REALM)19 and the Test of Functional Health Literacy in Adults (TOFHLA).20 These tests take time to administer, making them difficult to incorporate into a patient visit in a busy primary care practice, but they are useful for research. A newer screening test asks the patient to review a nutrition label and answer six questions.21
The most useful screening test for clinical use may consist of a single question. Two questions have been validated:
- “How often do you need to have someone help you when you read instructions, pamphlets, or other written material from your doctor or pharmacy?” Positive answers are “sometimes,” “often,” or “always.”
- “How confident are you filling out medical forms by yourself?” Positive answers are “somewhat,” “a little bit,” or “not at all.”22–24
These questions can be included either in the initial screening by a nurse or medical assistant or as part of the social history portion of the interview with the physician.
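For practices that collect intake questionnaires electronically, the two screeners' answer cutoffs are simple to encode. The sketch below is a hypothetical helper, not part of any published instrument; the question keys and function name are invented, while the question wording and "positive" answers are those given above.

```python
# "Positive" (at-risk) answers to the two validated single-item screeners.
# Any other answer is treated as a negative screen.
POSITIVE_ANSWERS = {
    # "How often do you need to have someone help you when you read
    #  instructions, pamphlets, or other written material from your
    #  doctor or pharmacy?"
    "help_reading": {"sometimes", "often", "always"},
    # "How confident are you filling out medical forms by yourself?"
    "confidence_forms": {"somewhat", "a little bit", "not at all"},
}

def screens_positive(question: str, answer: str) -> bool:
    """True if the answer suggests limited health literacy (flag for follow-up)."""
    return answer.strip().lower() in POSITIVE_ANSWERS[question]

print(screens_positive("help_reading", "Sometimes"))      # True: flag for follow-up
print(screens_positive("confidence_forms", "Extremely"))  # False: negative screen
```

A flagged result would not be a diagnosis of low health literacy, only a prompt for the team to use the shame-free, teach-back approaches described below.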
A “brown bag review” can also be helpful. Patients are asked to bring in their medications (often in a brown bag—hence the name). Asking the patient to identify each medication by name and the indication for it can uncover knowledge gaps that indicate low health literacy.
The point to remember is that patients with low health literacy will probably not tell you that they do not understand. However, they would appreciate being asked in a nonthreatening manner.
Make your office a shame-free environment
Many experts advocate a “universal precautions approach,” in which interventions to address low health literacy are incorporated into routine office practice for all patients. Practice sites should adopt a culture of a “shame-free environment,” in which support staff encourage patients to ask questions and are trained to offer assistance to those having difficulty reading or filling out forms.
On a broader level, medical offices and hospitals can partner with adult-learning specialists to help patients gain skills to navigate the health care system. All signage should be clear and should use plain language as opposed to medical terms. Medical forms and questionnaires should be designed to collect only essential information and should be written at a sixth-grade reading level or below. Patient instructions and educational materials should also be clear and free of jargon.
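One way to check whether a form or handout approaches a sixth-grade target is a standard readability formula. The sketch below applies the Flesch-Kincaid grade-level formula with a rough vowel-group syllable heuristic; it is an illustration only, not a substitute for formal readability testing, and the sample sentences are invented.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of vowels; discount a trailing silent 'e'.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Grade = 0.39*(words/sentences) + 11.8*(syllables/word) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

plain = "Take one pill each morning with food. Call us if you feel sick."
jargon = ("Administer one tablet orally every morning concomitantly with a meal. "
          "Contact the clinic promptly upon experiencing adverse symptoms.")
print(f"plain:  grade {flesch_kincaid_grade(plain):.1f}")
print(f"jargon: grade {flesch_kincaid_grade(jargon):.1f}")
```

On these samples, the plain wording scores well below sixth grade while the jargon-laden version scores well above it, which is the kind of gap such a check is meant to surface before materials reach patients.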
The ‘teach-back’ technique
The “teach-back” technique is a simple method to confirm patient understanding at the end of the visit. This involves asking patients in a nonthreatening way to explain or demonstrate what they have been told. Examples:
- “I want to make sure I have explained things correctly. Can you tell me how you plan to take your medication when you go home?”
- “I want to make sure I have done a good job explaining things to you. When you go home and tell your spouse about your visit today, what will you say?”
Put the burden of explanation on yourself as the first step, and let the patient know you are willing to explain again, more thoroughly, any instructions that may not have been clearly understood.
Other measures
Pictures and computer-based education may be useful for some patients who have difficulty reading.
Weiss25 advocates six steps to improve communication with patients in all encounters: slow down; use plain, nonmedical language; show or draw pictures; limit the amount of information provided; use the teach-back technique; and create a shame-free environment, encouraging questions.
Improving health literacy, as it relates to cross-cultural communication of treatment plans, must also encompass an understanding of health beliefs, which are often based on cultural norms, so that physician and patient can agree on a mutually acceptable plan of care. Physicians should be aware of preferences for nontraditional or complementary treatments that may reflect specific cultural beliefs.
IF THE PATIENT DOES NOT SPEAK ENGLISH
Verbal communication across language barriers poses another layer of challenge. A trained interpreter should be used whenever possible when treating a patient who speaks a different language than that of the practitioner. When family members are used as interpreters, there are risks that the patient may not fully disclose facts about the history of illness or specific symptoms, and also that family members may place their own “twist” on the story when translating.
The physician should speak directly to the patient in a normal tone of voice. In this setting, also remember that nonverbal communication can be misinterpreted. Gestures should be avoided. Finally, be aware that personal space is viewed differently depending on cultural background, as is eye contact.
It is helpful to have a pre-interview meeting with the interpreter to explain the format of the interview, as well as a post-interview meeting to ensure all parties felt they effectively communicated during the encounter.
TOWARD EQUITABLE CARE
Health care disparities are the result of multiple determinants. In December 2008, a National Institutes of Health summit conference cited not only barriers to access, but also the interaction of biological, behavioral, social, environmental, economic, cultural, and political factors, and noted that the causes and effects of health disparities transcend health care.26
Clearly, an individual physician’s efforts will not be all that is needed to eliminate health disparities. A team-based approach is essential, using skills of nonphysician members of the health care team such as nurses, medical assistants, social workers, and case managers. Continued opportunity for professional training and development in provider-patient communication skills should be offered.
However, the impact of effective cross-cultural communication and of managing low health literacy at the level of the physician-patient encounter should not be underestimated. By improving our self-awareness, eliciting the patient's explanatory model, and ensuring that patients with low health literacy or language barriers understand their treatment plans, we can do our part in working toward equitable care for all patients.
- Institute of Medicine of the National Academies. Unequal Treatment: Confronting Racial and Ethnic Disparities in Healthcare; 2003. http://www.nap.edu/openbook.php?record_id=12875&page=R1. Accessed January 5, 2012.
- American College of Physicians. Racial and Ethnic Disparities in Health Care, Updated 2010. Philadelphia: American College of Physicians; 2010: Policy Paper.
- US Department of Health and Human Services. The Office of Minority Health. What Is Cultural Competency? http://minorityhealth.hhs.gov/templates/browse.aspx?lvl=2&lvlid=11. Accessed January 5, 2012.
- Eiser AR, Ellis G. Viewpoint: cultural competence and the African American experience with health care: the case for specific content in cross-cultural education. Acad Med 2007; 82:176–183.
- Carrillo JE, Green AR, Betancourt JR. Cross-cultural primary care: a patient-based approach. Ann Intern Med 1999; 130:829–834.
- Olson DP, Windish DM. Communication discrepancies between physicians and hospitalized patients. Arch Intern Med 2010; 170:1302–1307.
- Kleinman A, Eisenberg L, Good B. Culture, illness, and care: clinical lessons from anthropologic and cross-cultural research. Ann Intern Med 1978; 88:251–258.
- National Library of Medicine. Current bibliographies in medicine 2000–1. Health Literacy. www.nlm.nih.gov/archive//20061214/pubs/cbm/hliteracy.html. Accessed January 5, 2012.
- Sentell TL, Halpin HA. Importance of adult literacy in understanding health disparities. J Gen Intern Med 2006; 21:862–866.
- Kutner M, Greenberg E, Jin Y, Paulsen C. The Health Literacy of America’s Adults: Results From the 2003 National Assessment of Adult Literacy (NCES 2006–483). US Department of Education. Washington, DC: National Center for Education Statistics; 2006. http://nces.ed.gov/pubs2006/2006483.pdf. Accessed January 5, 2012.
- Williams MV, Parker RM, Baker DW, et al. Inadequate functional health literacy among patients at two public hospitals. JAMA 1995; 274:1677–1682.
- Baker DW, Parker RM, Williams MV, et al. The health care experience of patients with low literacy. Arch Fam Med 1996; 5:329–334.
- Fact Sheet: health literacy and understanding medical information. Lawrenceville, NJ: Center for Health Care Strategies; 2002.
- Wolf MS, Davis TC, Tilson HH, Bass PF, Parker RM. Misunderstanding of prescription drug warning labels among patients with low literacy. Am J Health Syst Pharm 2006; 63:1048–1055.
- Baker DW, Gazmararian JA, Williams MV, et al. Functional health literacy and the risk of hospital admission among Medicare managed care enrollees. Am J Public Health 2002; 92:1278–1283.
- Schillinger D, Barton LR, Karter AJ, Wang F, Adler N. Does literacy mediate the relationship between education and health outcomes? A study of a low-income population with diabetes. Public Health Rep 2006; 121:245–254.
- Weiss BD, Palmer R. Relationship between health care costs and very low literacy skills in a medically needy and indigent Medicaid population. J Am Board Fam Pract 2004; 17:44–47.
- Friedland RB. Understanding health literacy: new estimates of the costs of inadequate health literacy. Washington, DC: National Academy on an Aging Society; 1998.
- Davis TC, Long SW, Jackson RH, et al. Rapid estimate of adult literacy in medicine: a shortened screening instrument. Fam Med 1993; 25:391–395.
- Baker DW, Williams MV, Parker RM, Gazmararian JA, Nurss J. Development of a brief test to measure functional health literacy. Patient Educ Couns 1999; 38:33–42.
- Weiss BD, Mays MZ, Martz W, et al. Quick assessment of literacy in primary care: the newest vital sign. Ann Fam Med 2005; 3:514–522.
- Chew LD, Bradley KA, Boyko EJ. Brief questions to identify patients with inadequate health literacy. Fam Med 2004; 36:588–594.
- Morris NS, MacLean CD, Chew LD, Littenberg B. The Single Item Literacy Screener: evaluation of a brief instrument to identify limited reading ability. BMC Fam Pract 2006; 7:21.
- Wallace LS, Rogers ES, Roskos SE, Holiday DB, Weiss BD. Brief report: screening items to identify patients with limited health literacy skills. J Gen Intern Med 2006; 21:874–877.
- Weiss BD. Health Literacy and Patient Safety: Help Patients Understand. 2nd ed. American Medical Association Foundation and American Medical Association. www.ama-assn.org/ama1/pub/upload/mm/367/healthlitclinicians.pdf. Accessed January 5, 2012.
- Dankwa-Mullan I, Rhee KB, Williams K, et al. The science of eliminating health disparities: summary and analysis of the NIH summit recommendations. Am J Public Health 2010; 100(suppl 1):S12–S18.
KEY POINTS
- To provide optimal care, physicians and staff need to think about ways to accommodate patients of other cultures and backgrounds, in particular by learning more about the patient’s culture and by examining themselves for possible bias.
- Even people who read and write very well may have limited health literacy. We should not assume that patients understand what we are talking about.
- Weiss (2011) advocates six steps to improve communication with patients in all encounters: slow down; use plain, nonmedical language; show or draw pictures; limit the amount of information provided; use the “teach-back” technique; and create a shame-free environment, encouraging questions.
- The “teach-back” technique is a simple way to confirm a patient’s understanding at the end of the visit. This involves asking the patient in a nonthreatening way to explain or show what he or she has been told.
Grand Rounds: Woman, 29, With Persistent Migraine
A 29-year-old woman with a history of frequent migraines presented to her primary care provider for a refill of medication. For the past two years she had been taking rizatriptan 10 mg, but with little relief. She stated that she had continued to experience discrete migraines several days per month, often clustered around menses. The severity of the headaches had negatively affected her work attendance, productivity, and social interactions. She wondered if she should be taking a different kind of medication.
The patient had been diagnosed with migraines at age 12, just prior to menarche. She described her headache as a unilateral, sharp throbbing pain associated with increased sensitivity to light and sound as well as nausea. She denied any history of head trauma. She had no allergies, and the only other medications she was taking at the time were an oral contraceptive (ethinyl estradiol/norgestimate 0.035 mg/0.18 mg with an oral triphasic 21/7 treatment cycle) and fluoxetine 20 mg for depression.
The patient worked daytime hours as a sales representative. She considered herself active, exercised regularly, ate a balanced diet, and slept well. She consumed no more than two to four alcoholic drinks per month and denied the use of herbals, dietary supplements, tobacco, or illegal drugs.
The patient stated that her mother had frequent headaches but had never sought a medical explanation or treatment. She was unaware of any other family history of headaches, and there was no family history of cardiovascular disease. Her sister had been diagnosed with a prolactinoma at age 25. At age 26, the patient had undergone a pituitary protocol MRI of the head with and without contrast, with negative results.
On examination, the patient was alert and oriented with normal vital signs. Her pupils were equal and reactive to light, and no papilledema was evident on fundoscopic examination. The cranial nerves were grossly intact and no other neurologic deficits were appreciated. No carotid bruits were present on cardiovascular exam.
Based on the patient’s history and physical exam, she met the International Classification of Headache Disorders (ICHD-II)1 diagnostic criteria for migraine without aura (1.1). When asked to recall the onset and frequency of attacks she had had in the previous four weeks, she noted that they regularly occurred during her menstrual cycle.
She was subsequently asked to begin a diary to record her headache characteristics, severity, and duration, with days of menstruation noted. The Migraine Disability Assessment (MIDAS) questionnaire2 (see Tables 1 and 2) was administered to measure the migraine attacks’ impact on the patient’s life; her score indicated that the headaches were causing her severe disability.
The patient’s abortive migraine medication was changed from rizatriptan 10 mg to the combination sumatriptan/naproxen sodium 85 mg/500 mg. She was instructed to take the initial dose as soon as she noticed signs of an impending migraine and to repeat the dose in two hours if symptoms persisted. The possibility of starting a preventive medication was discussed, but the patient wanted to evaluate her response to the combination triptan/NSAID before considering migraine prophylaxis.
Three months later, the patient returned for follow-up, including a review of her headache diary. She stated that the frequency and intensity of attacks had not decreased; acute treatment with sumatriptan/naproxen sodium made her headaches more bearable but did not fully relieve her symptoms. The patient had recorded a detailed account of each migraine, which, based on the ICHD-II criteria,1 demonstrated a pattern of headache occurrences consistent with menstrually related migraine. She reported a total of 18 headaches in the previous three months, 12 of which had occurred within the five-day perimenstrual period (see Figure 1).
Based on this information and the fact that the patient’s headaches were resistant to previous treatments, it was decided to alter the approach to her migraine management once more. In an effort to limit estrogen fluctuations during her menstrual cycle, her oral contraceptive was changed from ethinyl estradiol/norgestimate to a 12-week placebo-free monophasic regimen of ethinyl estradiol/levonorgestrel 20 mcg/90 mcg. For intermittent prophylaxis, she was instructed to take frovatriptan 2.5 mg twice daily, beginning two days prior to the start of menses and continuing through the last day of her cycle. For acute treatment of breakthrough migraines, she was prescribed sumatriptan 20-mg nasal spray to take at the first sign of migraine symptoms and instructed to repeat the dose if the pain persisted or returned.
The patient continued to track her headaches in the diary and was seen in the office after three months of following the revised menstrual migraine management plan. She reported fewer migraines associated with her menstrual cycle and noted that they were less severe and shorter in duration. When she repeated the MIDAS test, her score was reduced from 23 to 10. In the subsequent nine months she has reported a consistent decrease in migraine prevalence and now rarely needs the abortive therapy.
DISCUSSION
Migraine, though commonly encountered in clinical practice, is a complex disorder. For women, migraine headaches have been recognized by the World Health Organization as the 12th leading cause of “life lived with a disabling condition.”3 Pure menstrual migraine and menstrually related migraine will be the focus of discussion here.
Etiology
Menstrually related migraine (comparable to pure menstrual migraine, although the latter is distinguished by occurring only during the perimenstrual period1) is recognized as a distinct type of migraine associated with perimenstrual hormone fluctuations.4 Of women who experience migraine, 42% to 61% can associate their attacks with the perimenstrual period5; this is defined as two days before to three days after the start of menstruation.
It has also been determined that women are more likely to have migraine attacks during the late luteal and early follicular phases (when there is a natural drop in estrogen levels) than in other phases (when estrogen levels are higher).6 Despite clinical evidence to support this estrogen withdrawal theory, the pathophysiology is not completely understood. It is possible that affected women are more sensitive than other women to the decrease in estrogen levels that occurs with menstruation.7
History and Physical Findings of Menstrual Migraines
Almost every woman with perimenstrual migraines reports an absence of aura.7 In the evaluation of headache, the same criteria for migraine without aura pertain to the classifications of pure menstrual migraine (PMM) or menstrually related migraine (MRM).1 Correlation of migraine attacks to the onset of menses is the key finding in the patient history to differentiate menstrual migraine from migraine without aura in women.8 Furthermore, perimenstrual migraines are often of longer duration and more difficult to treat than migraines not associated with hormone fluctuations.9
In order to distinguish between PMM and MRM, it is important to understand that pure menstrual migraine attacks take place exclusively in the five-day perimenstrual window and at no other times of the cycle. The criteria for MRM allow for attacks at other times of the cycle.1
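This window logic can be made concrete with a short sketch. The following Python fragment is illustrative only, with hypothetical helper names, and simplifies the classification (the full ICHD-II criteria also require attacks in the window in at least two of three consecutive cycles). It labels a set of diary dates as a PMM-like or MRM-like pattern, using the day −2 to day +3 window; ICHD-II counts the first day of menstruation as day +1, with no day 0:

```python
from datetime import date, timedelta

def perimenstrual_window(menses_start):
    """Calendar days from day -2 through day +3 of a cycle.

    ICHD-II counts the first day of menstruation as day +1 and has no
    day 0, so the window spans five calendar days: the two days before
    menses onset plus the first three days of menstruation.
    """
    return {menses_start + timedelta(days=d) for d in range(-2, 3)}

def migraine_pattern(attack_dates, menses_starts):
    """Crude PMM/MRM pattern label from diary dates (sketch only)."""
    window = set()
    for start in menses_starts:
        window |= perimenstrual_window(start)
    inside = [d for d in attack_dates if d in window]
    outside = [d for d in attack_dates if d not in window]
    if not inside:
        return "not perimenstrual"
    return "PMM-like" if not outside else "MRM-like"
```

For example, a single attack the day before menses onset yields a PMM-like label, while additional mid-cycle attacks shift the label to MRM-like, mirroring the distinction drawn above.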
In addition to causing physical pain, menstrual migraines can impact work performance, household activities, and personal relationships. The MIDAS questionnaire is a disability assessment tool that can reveal to the practitioner how migraines have affected the patient’s life over the previous three months.10 This is a useful method to identify patients with disabling migraines, determine their need for treatment, and monitor treatment efficacy.
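The MIDAS total (the sum of five day-count items covering the previous three months) maps onto four disability grades. A minimal sketch, assuming the cutoffs published for the questionnaire by Stewart et al (reference 2); the function name is my own:

```python
def midas_grade(total_score):
    """Map a MIDAS total score to its disability grade.

    Published cutoffs for the MIDAS questionnaire:
    0-5 little or no, 6-10 mild, 11-20 moderate, 21+ severe disability.
    """
    if total_score < 0:
        raise ValueError("MIDAS score cannot be negative")
    if total_score <= 5:
        return "I: little or no disability"
    if total_score <= 10:
        return "II: mild disability"
    if total_score <= 20:
        return "III: moderate disability"
    return "IV: severe disability"
```

On this scale, the patient’s baseline score of 23 falls in grade IV (severe disability) and her follow-up score of 10 in grade II (mild), consistent with the improvement described in the case.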
Diagnosis
Menstrual migraine is a clinical diagnosis based on findings from the patient’s history. The International Headache Society has established specific diagnostic criteria in the ICHD-II for both PMM and MRM.1 An accurate and detailed migraine history is invaluable for the diagnosis of menstrual migraine. Although a formal questionnaire can serve as a good screening tool, it relies on the patient’s ability to recall specific times and dates with accuracy.11 Recall bias can be misleading in any attempt to confirm a diagnosis. The patient’s conscientious use of a daily headache diary or calendar (see Figure 2, for example) can lead to a precise record of the characteristics and timing of migraines, overcoming these obstacles.
Brain imaging is necessary if the patient’s symptoms suggest a critical etiology that requires immediate diagnosis and management. Red flags include sudden onset of a severe headache, a headache characterized as “the worst headache of the patient’s life,” a change in headache pattern, altered mental status, an abnormal neurologic examination, or fever with neck stiffness.12
Treatment Options for Menstrual Migraine
There is no FDA-approved treatment specific for menstrual migraines; however, medications used for management of nonmenstrual migraines are also those most commonly prescribed for women with menstrual migraine headaches.13 Because these headaches are frequently more severe and of longer duration than nonmenstrual migraine headaches, a combination of intermittent preventive therapy, hormone manipulation, and acute treatment strategies is often necessary.4
Acute therapy aims to treat migraine pain quickly and effectively with minimal adverse effects or need for additional medication. Triptans have been the mainstay of menstrual migraine treatment and have been proven effective for both acute attacks and prevention.4 Sumatriptan has a rapid onset of action and may be given orally as a 50- or 100-mg tablet, as a 6-mg subcutaneous injection, or as a 20-mg nasal spray.14
Abortive therapies are most effective when taken at the first sign of an attack. Patients can repeat the dose in two hours if the headache persists or recurs, to a maximum of two doses in 24 hours.15 Rizatriptan is another triptan used for acute treatment of menstrual migraine headaches. Its initial 10-mg dose can be repeated every two hours, to a maximum of 30 mg per 24 hours. NSAIDs, such as naproxen sodium, have also been recommended in acute migraine attacks. They seem to work synergistically with triptans, inhibiting prostaglandin synthesis and blocking neurogenic inflammation.15
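The repeat-dosing rules above (wait at least two hours between doses; no more than two doses in 24 hours) can be expressed as a simple check. This is an illustrative sketch, not clinical software; the function name and default limits are my own:

```python
from datetime import datetime, timedelta

def may_redose(prior_doses, now, min_interval_h=2, max_per_24h=2):
    """Return True if another abortive dose is permissible now.

    Checks a minimum interval since the last dose and a maximum number
    of doses in any rolling 24-hour period, given the datetimes of
    prior doses.
    """
    recent = [t for t in prior_doses if now - t < timedelta(hours=24)]
    if len(recent) >= max_per_24h:
        return False  # 24-hour ceiling already reached
    if recent and now - max(recent) < timedelta(hours=min_interval_h):
        return False  # too soon after the previous dose
    return True
```

For a drug such as rizatriptan, the same check could be run with max_per_24h=3, since three 10-mg doses reach the 30-mg daily ceiling.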
Clinical study results have demonstrated superior pain relief and decreased migraine recurrence when a triptan and NSAID are used in combination, compared with use of either medication alone.4 A single-tablet formulation of sumatriptan 85 mg and naproxen sodium 500 mg may be considered for initial therapy in hard-to-treat patients.14
Preventive therapy should be considered when responsiveness to acute treatment is inadequate.4 Nonhormonal intermittent prophylactic treatment is recommended beginning two days prior to the onset of menses and continuing for five days.16 Longer-acting triptans, such as frovatriptan 2.5 mg and naratriptan 1.0 mg, dosed twice daily, have been shown in clinical trials to be effective when used during the perimenstrual period.17,18
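The intermittent schedule described here (begin two days before the predicted onset of menses and continue twice-daily dosing for five days) amounts to a small date calculation. A sketch with an illustrative function name; the dates are hypothetical:

```python
from datetime import date, timedelta

def prophylaxis_days(predicted_menses_start, total_days=5):
    """Calendar days for perimenstrual intermittent prophylaxis.

    Dosing begins two days before the predicted first day of menses
    and continues for `total_days` consecutive days (twice-daily
    dosing on each listed day).
    """
    first_dose_day = predicted_menses_start - timedelta(days=2)
    return [first_dose_day + timedelta(days=i) for i in range(total_days)]
```

For a cycle predicted to start on March 10, this yields dosing days March 8 through March 12; lengthening total_days would cover regimens that continue through the end of menses. As the surrounding text notes, this kind of fixed schedule only works when menstruation is predictable.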
The advantage of short-term therapy over daily prophylaxis is the potential to avoid adverse effects seen with continuous exposure to the drug.3 However, successful therapy relies on consistency in menstruation, and therefore may not be ideal for women with irregular cycles or those with coexisting nonmenstrual migraines.16 Estrogen-based therapy is an option for these women and for those who have failed nonhormonal methods.19
The goal of hormone prophylaxis is to prevent or reduce the physiologic decline in estradiol that occurs in the late luteal phase.4 Clinical studies have been conducted using various hormonal strategies to maintain steady estradiol levels, all of which decreased migraine prevalence.19 Estrogen fluctuations can be minimized by eliminating the placebo week in traditional estrogen/progestin oral contraceptives to achieve an extended-cycle regimen, resembling that of the 12-week ethinyl estradiol/levonorgestrel formulation.19
Continuous use of combined oral contraceptives is also an option for relief of menstrual migraine. When cyclic or extended-cycle regimens allow for menses, supplemental estrogen (10 to 20 mcg of ethinyl estradiol) is recommended during the hormone-free week.14
CONCLUSION
Proper diagnosis of menstrual migraines, using screening tools and the MIDAS questionnaire, can help practitioners provide the most effective migraine management for their patients. The most important step toward a good prognosis is acknowledging menstrual migraine as a unique headache disorder and formulating a precise diagnosis in order to identify individually tailored treatment options. With proper identification and integrated acute and prophylactic treatment, women with menstrual migraines are able to lead a healthier, more satisfying life.
REFERENCES
1. International Headache Society. The International Classification of Headache Disorders. 2nd ed. Cephalalgia. 2004;24(suppl 1):1-160.
2. Stewart WF, Lipton RB, Dowson AJ, Sawyer J. Development and testing of the Migraine Disability Assessment (MIDAS) Questionnaire to assess headache-related disability. Neurology. 2001;56(6 suppl 1):S20-S28.
3. MacGregor EA. Perimenstrual headaches: unmet needs. Curr Pain Headache Rep. 2008;12(6):468-474.
4. Mannix LK. Menstrual-related pain conditions: dysmenorrhea and migraine. J Womens Health (Larchmt). 2008;17(5):879-891.
5. Martin VT. New theories in the pathogenesis of menstrual migraine. Curr Pain Headache Rep. 2008;12(6):453-462.
6. MacGregor EA. Migraine headache in perimenopausal and menopausal women. Curr Pain Headache Rep. 2009;13(5):399-403.
7. Martin VT, Wernke S, Mandell K, et al. Symptoms of premenstrual syndrome and their association with migraine headache. Headache. 2006; 46(1):125-137.
8. Martin VT, Behbehani M. Ovarian hormones and migraine headache: understanding mechanisms and pathogenesis—part 2. Headache. 2006;46(3):365-386.
9. Granella F, Sances G, Allais G, et al. Characteristics of menstrual and nonmenstrual attacks in women with menstrually related migraine referred to headache centres. Cephalalgia. 2004;24(9):707-716.
10. Hutchinson SL, Silberstein SD. Menstrual migraine: case studies of women with estrogen-related headaches. Headache. 2008;48(suppl 3):S131-S141.
11. Tepper SJ, Zatochill M, Szeto M, et al. Development of a simple menstrual migraine screening tool for obstetric and gynecology clinics: the Menstrual Migraine Assessment Tool. Headache. 2008; 48(10):1419-1425.
12. Marcus DA. Focus on primary care diagnosis and management of headache in women. Obstet Gynecol Surv. 1999;54(6):395-402.
13. Tepper SJ. Tailoring management strategies for the patient with menstrual migraine: focus on prevention and treatment. Headache. 2006;46(suppl 2):S61-S68.
14. Lay CL, Payne R. Recognition and treatment of menstrual migraine. Neurologist. 2007;13(4):197-204.
15. Henry KA, Cohen CI. Perimenstrual headache: treatment options. Curr Pain Headache Rep. 2009;13(1):82-88.
16. Calhoun AH. Estrogen-associated migraine. www.uptodate.com/contents/estrogen-associated-migraine. Accessed May 4, 2011.
17. Silberstein SD, Elkind AH, Schreiber C, et al. A randomized trial of frovatriptan for the intermittent prevention of menstrual migraine. Neurology. 2004;63:261-269.
18. Mannix LK, Savani N, Landy S, et al. Efficacy and tolerability of naratriptan for short-term prevention of menstrually related migraine: data from two randomized, double-blind, placebo-controlled studies. Headache. 2007;47(7):1037-1049.
19. Calhoun AH, Hutchinson S. Hormonal therapies for menstrual migraine. Curr Pain Headache Rep. 2009;13(5):381-385.
Abortive therapies are most effective when taken at the first sign of an attack. Patients can repeat the dose in two hours if the headache persists or recurs, to a maximum of two doses in 24 hours.15 Rizatriptan is another triptan used for acute treatment of menstrual migraine headaches. Its initial 10-mg dose can be repeated every two hours, to a maximum of 30 mg per 24 hours. NSAIDs, such as naproxen sodium, have also been recommended in acute migraine attacks. They seem to work synergistically with triptans, inhibiting prostaglandin synthesis and blocking neurogenic inflammation.15
Clinical study results have demonstrated superior pain relief and decreased migraine recurrence when a triptan and NSAID are used in combination, compared with use of either medication alone.4 A single-tablet formulation of sumatriptan 85 mg and naproxen sodium 500 mg may be considered for initial therapy in hard-to-treat patients.14
Preventive therapy should be considered when responsiveness to acute treatment is inadequate.4 Nonhormonal intermittent prophylactic treatment is recommended two days prior to the beginning of menses, continuing for five days.16 Longer-acting triptans, such as frovatriptan 2.5 mg and naratriptan 1.0 mg, dosed twice daily, have been demonstrated as effective in clinical trials when used during the perimenstrual period.17,18
The advantage of short-term therapy over daily prophylaxis is the potential to avoid adverse effects seen with continuous exposure to the drug.3 However, successful therapy relies on consistency in menstruation, and therefore may not be ideal for women with irregular cycles or those with coexisting nonmenstrual migraines.16 Estrogen-based therapy is an option for these women and for those who have failed nonhormonal methods.19
The goal of hormone prophylaxis is to prevent or reduce the physiologic decline in estradiol that occurs in the late luteal phase.4 Clinical studies have been conducted using various hormonal strategies to maintain steady estradiol levels, all of which decreased migraine prevalence.19 Estrogen fluctuations can be minimized by eliminating the placebo week in traditional estrogen/progestin oral contraceptives to achieve an extended-cycle regimen, resembling that of the 12-week ethinyl estradiol/levonorgestrel formulation.19
Continuous use of combined oral contraceptives is also an option for relief of menstrual migraine. When cyclic or extended-cycle regimens allow for menses, supplemental estrogen (10- to 20-mg ethinyl estradiol) is recommended during the hormone-free week.14
CONCLUSION
Proper diagnosis of menstrual migraines, using screening tools and the MIDAS questionnaire, can help practitioners provide the most effective migraine management for their patients. The most important step toward a good prognosis is acknowledging menstrual migraine as a unique headache disorder and formulating a precise diagnosis in order to identify individually tailored treatment options. With proper identification and integrated acute and prophylactic treatment, women with menstrual migraines are able to lead a healthier, more satisfying life.
REFERENCES
1. International Headache Society. The International Classification of Headache Disorders. 2nd ed. Cephalalgia. 2004;24(suppl 1):1-160.
2. Stewart WF, Lipton RB, Dowson AJ, Sawyer J. Development and testing of the Migraine Disability Assessment (MIDAS) Questionnaire to assess headache-related disability. Neurology. 2001;56(6 suppl 1):S20-S28.
3. MacGregor EA. Perimenstrual headaches: unmet needs. Curr Pain Headache Rep. 2008;12(6):468-474.
4. Mannix LK. Menstrual-related pain conditions: dysmenorrhea and migraine. J Womens Health (Larchmt). 2008;17(5):879-891.
5. Martin VT. New theories in the pathogenesis of menstrual migraine. Curr Pain Headache Rep. 2008;12(6):453-462.
6. MacGregor EA. Migraine headache in perimenopausal and menopausal women. Curr Pain Headache Rep. 2009;13(5):399-403.
7. Martin VT, Wernke S, Mandell K, et al. Symptoms of premenstrual syndrome and their association with migraine headache. Headache. 2006; 46(1):125-137.
8. Martin VT, Behbehani M. Ovarian hormones and migraine headache: understanding mechanisms and pathogenesis—part 2. Headache. 2006;46(3):365-386.
9. Granella F, Sances G, Allais G, et al. Characteristics of menstrual and nonmenstrual attacks in women with menstrually related migraine referred to headache centres. Cephalalgia. 2004;24(9):707-716.
10. Hutchinson SL, Silberstein SD. Menstrual migraine: case studies of women with estrogen-related headaches. Headache. 2008;48 suppl 3:S131-S141.
11. Tepper SJ, Zatochill M, Szeto M, et al. Development of a simple menstrual migraine screening tool for obstetric and gynecology clinics: the Menstrual Migraine Assessment Tool. Headache. 2008; 48(10):1419-1425.
12. Marcus DA. Focus on primary care diagnosis and management of headache in women. Obstet Gynecol Surv. 1999;54(6):395-402.
13. Tepper SJ. Tailoring management strategies for the patient with menstrual migraine: focus on prevention and treatment. Headache. 2006;46(suppl 2):S61-S68.
14. Lay CL, Payne R. Recognition and treatment of menstrual migraine. Neurologist. 2007;13(4):197-204.
15. Henry KA, Cohen CI. Perimenstrual headache: treatment options. Curr Pain Headache Rep. 2009;13(1):82-88.
16. Calhoun AH. Estrogen-associated migraine. www.uptodate.com/contents/estrogen-associated-migraine. Accessed May 4, 2011.
17. Silberstein SD, Elkind AH, Schreiber C, et al. A randomized trial of frovatriptan for the intermittent prevention of menstrual migraine. Neurology. 2004;63:261-269.
18. Mannix LK, Savani N, Landy S, et al. Efficacy and tolerability of naratriptan for short-term prevention of menstrually related migraine: data from two randomized, double-blind, placebo-controlled studies. Headache. 2007;47(7):1037-1049.
19. Calhoun AH, Hutchinson S. Hormonal therapies for menstrual migraine. Curr Pain Headache Rep. 2009;13(5):381-385.
A 29-year-old woman with a history of frequent migraines presented to her primary care provider for a refill of medication. For the past two years she had been taking rizatriptan 10 mg, but with little relief. She stated that she had continued to experience discrete migraines several days per month, often clustered around menses. The severity of the headaches had negatively affected her work attendance, productivity, and social interactions. She wondered if she should be taking a different kind of medication.
The patient had been diagnosed with migraines at age 12, just prior to menarche. She described her headache as a unilateral, sharp throbbing pain associated with increased sensitivity to light and sound as well as nausea. She denied any history of head trauma. She had no allergies, and the only other medications she was taking at the time were an oral contraceptive (ethinyl estradiol/norgestimate 0.035 mg/0.18 mg with an oral triphasic 21/7 treatment cycle) and fluoxetine 20 mg for depression.
The patient worked daytime hours as a sales representative. She considered herself active, exercised regularly, ate a balanced diet, and slept well. She consumed no more than two to four alcoholic drinks per month and denied the use of herbals, dietary supplements, tobacco, or illegal drugs.
The patient stated that her mother had frequent headaches but had never sought a medical explanation or treatment. She was unaware of any other family history of headaches, and there was no family history of cardiovascular disease. Her sister had been diagnosed with a prolactinoma at age 25. At age 26, the patient had undergone a pituitary protocol MRI of the head with and without contrast, with negative results.
On examination, the patient was alert and oriented with normal vital signs. Her pupils were equal and reactive to light, and no papilledema was evident on fundoscopic examination. The cranial nerves were grossly intact and no other neurologic deficits were appreciated. No carotid bruits were present on cardiovascular exam.
Based on the patient’s history and physical exam, she met the International Classification of Headache Disorders (ICHD-II)1 diagnostic criteria for migraine without aura (1.1). When asked to recall the onset and frequency of attacks she had had in the previous four weeks, she noted that they regularly occurred during her menstrual cycle.
She was subsequently asked to begin a diary to record her headache characteristics, severity, and duration, with days of menstruation noted. The Migraine Disability Assessment (MIDAS) questionnaire2 (see Tables 1 and 2) was administered to measure the migraine attacks’ impact on the patient’s life; her score indicated that the headaches were causing her severe disability.
The patient’s abortive migraine medication was changed from rizatriptan 10 mg to the combination sumatriptan/naproxen sodium 85 mg/500 mg. She was instructed to take the initial dose as soon as she noticed signs of an impending migraine and to repeat the dose in two hours if symptoms persisted. The possibility of starting a preventive medication was discussed, but the patient wanted to evaluate her response to the combination triptan/NSAID before considering migraine prophylaxis.
Three months later, the patient returned for follow-up, including a review of her headache diary. She stated that the frequency and intensity of attacks had not decreased; acute treatment with sumatriptan/naproxen sodium made her headaches more bearable but did not fully relieve her symptoms. The patient had recorded a detailed account of each migraine, which, based on the ICHD-II criteria,1 demonstrated a pattern of headache occurrences consistent with menstrually related migraine. She reported a total of 18 headaches in the previous three months, 12 of which had occurred within the five-day perimenstrual period (see Figure 1).
Based on this information and the fact that the patient’s headaches were resistant to previous treatments, it was decided to alter the approach to her migraine management once more. In an effort to limit estrogen fluctuations during her menstrual cycle, her oral contraceptive was changed from ethinyl estradiol/norgestimate to a 12-week placebo-free monophasic regimen of ethinyl estradiol/levonorgestrel 20 mcg/90 mcg. For intermittent prophylaxis, she was instructed to take frovatriptan 2.5 mg twice daily, beginning two days prior to the start of menses and continuing through the last day of her cycle. For acute treatment of breakthrough migraines, she was prescribed sumatriptan 20-mg nasal spray to take at the first sign of migraine symptoms and instructed to repeat the dose if the pain persisted or returned.
The patient continued to track her headaches in the diary and was seen in the office after three months of following the revised menstrual migraine management plan. She reported fewer migraines associated with her menstrual cycle and noted that they were less severe and shorter in duration. When she repeated the MIDAS test, her score was reduced from 23 to 10. In the subsequent nine months she has reported a consistent decrease in migraine prevalence and now rarely needs the abortive therapy.
DISCUSSION
Migraine, though commonly encountered in clinical practice, is a complex disorder. For women, migraine headaches have been recognized by the World Health Organization as the 12th leading cause of “life lived with a disabling condition.”3 Pure menstrual migraine and menstrually related migraine will be the focus of discussion here.
Etiology
Menstrually related migraine is recognized as a distinct type of migraine associated with perimenstrual hormone fluctuations,4 as is pure menstrual migraine, which is distinguished by attacks occurring only during the perimenstrual period.1 Of women who experience migraine, 42% to 61% associate their attacks with the perimenstrual period,5 defined as two days before to three days after the start of menstruation.
It has also been determined that women are more likely to have migraine attacks during the late luteal and early follicular phases (when there is a natural drop in estrogen levels) than in other phases (when estrogen levels are higher).6 Despite clinical evidence to support this estrogen withdrawal theory, the pathophysiology is not completely understood. It is possible that affected women are more sensitive than other women to the decrease in estrogen levels that occurs with menstruation.7
History and Physical Findings of Menstrual Migraines
Almost every woman with perimenstrual migraines reports an absence of aura.7 In the evaluation of headache, the same criteria used for migraine without aura apply to the classifications of pure menstrual migraine (PMM) and menstrually related migraine (MRM).1 The key historical finding that differentiates menstrual migraine from migraine without aura is the correlation of attacks with the onset of menses.8 Furthermore, perimenstrual migraines are often longer lasting and more difficult to treat than migraines not associated with hormone fluctuations.9
The distinction between PMM and MRM rests on timing: pure menstrual migraine attacks occur exclusively within the five-day perimenstrual window, whereas the criteria for MRM also allow attacks at other times of the cycle.1
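The ICHD-II timing rule can be sketched in code. The sketch below assumes a headache diary reduced to attack-onset dates and menses-onset dates; the window check (day −2 through day +3, where day 1 is the first day of menstruation and there is no day 0) and the requirement that attacks fall in the perimenstrual window in at least two of three cycles follow the ICHD-II appendix criteria, but the function names and data shapes are illustrative only.

```python
from datetime import date

# ICHD-II perimenstrual window: day -2 through day +3, where day 1 is
# the first day of menstruation (there is no day 0). Expressed as an
# offset in days from the menses start date, that is -2..+2.
def in_perimenstrual_window(attack: date, menses_start: date) -> bool:
    return -2 <= (attack - menses_start).days <= 2

def classify(attacks: list[date], menses_starts: list[date]) -> str:
    """Illustrative sketch: attacks must fall in the perimenstrual
    window of at least two of three observed cycles; PMM if attacks
    occur at no other time, MRM if they also occur at other times."""
    def is_peri(a: date) -> bool:
        return any(in_perimenstrual_window(a, m) for m in menses_starts)

    cycles_hit = sum(
        any(in_perimenstrual_window(a, m) for a in attacks)
        for m in menses_starts
    )
    if cycles_hit < 2:
        return "not menstrual migraine"
    return "PMM" if all(is_peri(a) for a in attacks) else "MRM"
```

A diary like the case patient’s — attacks in the perimenstrual window of every cycle but also at other times of the month — would classify as MRM under this rule.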
In addition to causing physical pain, menstrual migraines can impact work performance, household activities, and personal relationships. The MIDAS questionnaire is a disability assessment tool that can reveal to the practitioner how migraines have affected the patient’s life over the previous three months.10 This is a useful method to identify patients with disabling migraines, determine their need for treatment, and monitor treatment efficacy.
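MIDAS scoring is simple arithmetic: the five item responses (days of missed or reduced activity over the previous three months) are summed, and the total maps to one of four disability grades. A minimal sketch — the grade cutoffs are the published MIDAS bands, while the function itself is illustrative:

```python
def midas_grade(item_days: list[int]) -> tuple[int, str]:
    """Sum the five MIDAS items and map the total to a disability
    grade using the published cutoffs (0-5, 6-10, 11-20, 21+)."""
    if len(item_days) != 5:
        raise ValueError("MIDAS has exactly five scored items")
    score = sum(item_days)
    if score <= 5:
        grade = "I: little or no disability"
    elif score <= 10:
        grade = "II: mild disability"
    elif score <= 20:
        grade = "III: moderate disability"
    else:
        grade = "IV: severe disability"
    return score, grade
```

A total of 23, like the case patient’s initial score, falls in Grade IV (severe disability); her follow-up score of 10 corresponds to Grade II.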
Diagnosis
Menstrual migraine is a clinical diagnosis based on findings from the patient’s history. The International Headache Society has established specific diagnostic criteria in the ICHD-II for both PMM and MRM.1 An accurate and detailed migraine history is invaluable for the diagnosis of menstrual migraine. Although a formal questionnaire can serve as a good screening tool, it relies on the patient’s ability to recall specific times and dates with accuracy.11 Recall bias can be misleading in any attempt to confirm a diagnosis. The patient’s conscientious use of a daily headache diary or calendar (see Figure 2, for example) yields a precise record of the characteristics and timing of migraines, overcoming these obstacles.
Brain imaging is necessary if the patient’s symptoms suggest a critical etiology that requires immediate diagnosis and management. Red flags include sudden onset of a severe headache, a headache characterized as “the worst headache of the patient’s life,” a change in headache pattern, altered mental status, an abnormal neurologic examination, or fever with neck stiffness.12
Treatment Options for Menstrual Migraine
There is no FDA-approved treatment specific for menstrual migraines; however, medications used for management of nonmenstrual migraines are also those most commonly prescribed for women with menstrual migraine headaches.13 Because these headaches are frequently more severe and of longer duration than nonmenstrual migraine headaches, a combination of intermittent preventive therapy, hormone manipulation, and acute treatment strategies is often necessary.4
Acute therapy aims to relieve migraine pain quickly and effectively, with minimal adverse effects and minimal need for additional medication. Triptans have been the mainstay of menstrual migraine treatment and have proven effective for both acute attacks and prevention.4 Sumatriptan has a rapid onset of action and may be given orally as a 50- or 100-mg tablet, as a 6-mg subcutaneous injection, or as a 20-mg nasal spray.14
Abortive therapies are most effective when taken at the first sign of an attack. Patients can repeat the dose in two hours if the headache persists or recurs, to a maximum of two doses in 24 hours.15 Rizatriptan is another triptan used for acute treatment of menstrual migraine headaches. Its initial 10-mg dose can be repeated every two hours, to a maximum of 30 mg per 24 hours. NSAIDs, such as naproxen sodium, have also been recommended in acute migraine attacks. They seem to work synergistically with triptans, inhibiting prostaglandin synthesis and blocking neurogenic inflammation.15
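The repeat-dose rules above — repeat no sooner than two hours after the last dose, with sumatriptan capped at two doses and rizatriptan at 30 mg (three 10-mg doses) per 24 hours — amount to a simple interval-and-count check. A hypothetical sketch of that logic; the limits are taken from the text, while the helper itself is illustrative and not dosing guidance:

```python
from datetime import datetime, timedelta

# Limits as stated in the text: sumatriptan may be repeated once after
# 2 hours (max 2 doses/24 h); rizatriptan 10 mg every 2 hours up to
# 30 mg/24 h (i.e., 3 doses). Illustrative only, not dosing guidance.
LIMITS = {
    "sumatriptan": {"min_interval_h": 2, "max_doses_24h": 2},
    "rizatriptan": {"min_interval_h": 2, "max_doses_24h": 3},
}

def may_repeat_dose(drug: str, prior_doses: list[datetime],
                    now: datetime) -> bool:
    rule = LIMITS[drug]
    # Only doses within the trailing 24-hour window count toward the cap.
    recent = [t for t in prior_doses if now - t < timedelta(hours=24)]
    if len(recent) >= rule["max_doses_24h"]:
        return False  # 24-hour dose cap reached
    # Every recent dose must be at least the minimum interval ago.
    return all(now - t >= timedelta(hours=rule["min_interval_h"])
               for t in recent)
```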
Clinical study results have demonstrated superior pain relief and decreased migraine recurrence when a triptan and NSAID are used in combination, compared with use of either medication alone.4 A single-tablet formulation of sumatriptan 85 mg and naproxen sodium 500 mg may be considered for initial therapy in hard-to-treat patients.14
Preventive therapy should be considered when responsiveness to acute treatment is inadequate.4 Nonhormonal intermittent prophylactic treatment is recommended beginning two days prior to the onset of menses and continuing for five days.16 Longer-acting triptans, such as frovatriptan 2.5 mg and naratriptan 1 mg, dosed twice daily, have been shown effective in clinical trials when used during the perimenstrual period.17,18
The advantage of short-term therapy over daily prophylaxis is the potential to avoid adverse effects seen with continuous exposure to the drug.3 However, successful therapy relies on consistency in menstruation, and therefore may not be ideal for women with irregular cycles or those with coexisting nonmenstrual migraines.16 Estrogen-based therapy is an option for these women and for those who have failed nonhormonal methods.19
The goal of hormone prophylaxis is to prevent or reduce the physiologic decline in estradiol that occurs in the late luteal phase.4 Clinical studies have been conducted using various hormonal strategies to maintain steady estradiol levels, all of which decreased migraine prevalence.19 Estrogen fluctuations can be minimized by eliminating the placebo week in traditional estrogen/progestin oral contraceptives to achieve an extended-cycle regimen, resembling that of the 12-week ethinyl estradiol/levonorgestrel formulation.19
Continuous use of combined oral contraceptives is also an option for relief of menstrual migraine. When cyclic or extended-cycle regimens allow for menses, supplemental estrogen (10- to 20-mcg ethinyl estradiol) is recommended during the hormone-free week.14
CONCLUSION
Proper diagnosis of menstrual migraines, using screening tools and the MIDAS questionnaire, can help practitioners provide the most effective migraine management for their patients. The most important step toward a good prognosis is acknowledging menstrual migraine as a unique headache disorder and formulating a precise diagnosis in order to identify individually tailored treatment options. With proper identification and integrated acute and prophylactic treatment, women with menstrual migraines can lead healthier, more satisfying lives.
REFERENCES
1. International Headache Society. The International Classification of Headache Disorders. 2nd ed. Cephalalgia. 2004;24(suppl 1):1-160.
2. Stewart WF, Lipton RB, Dowson AJ, Sawyer J. Development and testing of the Migraine Disability Assessment (MIDAS) Questionnaire to assess headache-related disability. Neurology. 2001;56(6 suppl 1):S20-S28.
3. MacGregor EA. Perimenstrual headaches: unmet needs. Curr Pain Headache Rep. 2008;12(6):468-474.
4. Mannix LK. Menstrual-related pain conditions: dysmenorrhea and migraine. J Womens Health (Larchmt). 2008;17(5):879-891.
5. Martin VT. New theories in the pathogenesis of menstrual migraine. Curr Pain Headache Rep. 2008;12(6):453-462.
6. MacGregor EA. Migraine headache in perimenopausal and menopausal women. Curr Pain Headache Rep. 2009;13(5):399-403.
7. Martin VT, Wernke S, Mandell K, et al. Symptoms of premenstrual syndrome and their association with migraine headache. Headache. 2006; 46(1):125-137.
8. Martin VT, Behbehani M. Ovarian hormones and migraine headache: understanding mechanisms and pathogenesis—part 2. Headache. 2006;46(3):365-386.
9. Granella F, Sances G, Allais G, et al. Characteristics of menstrual and nonmenstrual attacks in women with menstrually related migraine referred to headache centres. Cephalalgia. 2004;24(9):707-716.
10. Hutchinson SL, Silberstein SD. Menstrual migraine: case studies of women with estrogen-related headaches. Headache. 2008;48 suppl 3:S131-S141.
11. Tepper SJ, Zatochill M, Szeto M, et al. Development of a simple menstrual migraine screening tool for obstetric and gynecology clinics: the Menstrual Migraine Assessment Tool. Headache. 2008; 48(10):1419-1425.
12. Marcus DA. Focus on primary care diagnosis and management of headache in women. Obstet Gynecol Surv. 1999;54(6):395-402.
13. Tepper SJ. Tailoring management strategies for the patient with menstrual migraine: focus on prevention and treatment. Headache. 2006;46(suppl 2):S61-S68.
14. Lay CL, Payne R. Recognition and treatment of menstrual migraine. Neurologist. 2007;13(4):197-204.
15. Henry KA, Cohen CI. Perimenstrual headache: treatment options. Curr Pain Headache Rep. 2009;13(1):82-88.
16. Calhoun AH. Estrogen-associated migraine. www.uptodate.com/contents/estrogen-associated-migraine. Accessed May 4, 2011.
17. Silberstein SD, Elkind AH, Schreiber C, et al. A randomized trial of frovatriptan for the intermittent prevention of menstrual migraine. Neurology. 2004;63:261-269.
18. Mannix LK, Savani N, Landy S, et al. Efficacy and tolerability of naratriptan for short-term prevention of menstrually related migraine: data from two randomized, double-blind, placebo-controlled studies. Headache. 2007;47(7):1037-1049.
19. Calhoun AH, Hutchinson S. Hormonal therapies for menstrual migraine. Curr Pain Headache Rep. 2009;13(5):381-385.