Why Scientists Are Linking More Diseases to Light at Night
This October, millions of Americans missed out on two of the most spectacular shows in the universe: the northern lights and a rare comet. Even if you were aware of them, light pollution made them difficult to see, unless you went to a dark area and let your eyes adjust.
It’s not getting any easier — the night sky over North America has been growing brighter by about 10% per year since 2011. More and more research is linking all that light pollution to a surprising range of health consequences: cancer, heart disease, diabetes, Alzheimer’s disease, and even low sperm quality, though the reasons for these troubling associations are not always clear.
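To put that growth rate in perspective, a steady 10% annual increase compounds quickly. A minimal sketch of the arithmetic (assuming simple compounding from 2011 through 2024; the 10%-per-year figure is the one reported above):

```python
# Sketch: how a ~10%/year increase in sky brightness compounds over time.
# Assumes simple annual compounding starting from a 2011 baseline.
def brightness_factor(annual_rate: float, years: int) -> float:
    """Multiplicative brightness increase after compounding for `years` years."""
    return (1 + annual_rate) ** years

# 2011 to 2024 is 13 years of compounding.
factor = brightness_factor(0.10, 13)
print(f"Night sky roughly {factor:.1f}x brighter than in 2011")  # ~3.5x
```

Under that assumption, the night sky would now be roughly three and a half times brighter than it was in 2011.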
“We’ve lost the contrast between light and dark, and we are confusing our physiology on a regular basis,” said John Hanifin, PhD, associate director of Thomas Jefferson University’s Light Research Program.
Our own galaxy is invisible to nearly 80% of people in North America. In 1994, an earthquake-triggered blackout in Los Angeles led to calls to the Griffith Observatory from people wondering about that hazy blob of light in the night sky. It was the Milky Way.
Glaring headlights, illuminated buildings, blazing billboards, and streetlights fill our urban skies with a glow that even affects rural residents. Inside, since the invention of the lightbulb, we’ve kept our homes bright at night. Now, we’ve also added blue light-emitting devices — smartphones, television screens, tablets — which have been linked to sleep problems.
But outdoor light may matter for our health, too. “Every photon counts,” Hanifin said.
Bright Lights, Big Problems
For one 2024 study, researchers used satellite data to measure light pollution at the residential addresses of over 13,000 people. They found that those who lived in places with the brightest skies at night had a 31% higher risk of high blood pressure. Another study, out of Hong Kong, showed a 29% higher risk of death from coronary heart disease. And yet another found a 17% higher risk of cerebrovascular disease, such as strokes or brain aneurysms.
Of course, urban areas also have air pollution, noise, and a lack of greenery. So, for some studies, scientists controlled for these factors, and the correlation remained strong (although air pollution with fine particulate matter appeared to be worse for heart health than outdoor light).
Research has found links between the nighttime glow outside and other diseases:
Breast cancer. “It’s a very strong correlation,” said Randy Nelson, PhD, a neuroscientist at West Virginia University. A study of over 100,000 teachers in California revealed that women living in areas with the most light pollution had a 12% higher risk. That effect is comparable to increasing your intake of ultra-processed foods by 10%.
Alzheimer’s disease. In a study published this fall, outdoor light at night was more strongly linked to the disease than even alcohol misuse or obesity.
Diabetes. In one recent study, people living in the most illuminated areas had a 28% higher risk of diabetes than those residing in much darker places. In a country like China, scientists concluded that 9 million cases of diabetes could be linked to light pollution.
What Happens in Your Body When You’re Exposed to Light at Night
Scientists sometimes call melatonin the “hormone of darkness.” “Darkness is very important,” Hanifin said. When he and his colleagues started studying the effects of light on human physiology decades ago, “people thought we were borderline crazy,” he said.
Nighttime illumination affects the health and behavior of species as diverse as Siberian hamsters, zebra finches, mice, crickets, and mosquitoes. Like most creatures on Earth, humans have internal clocks that are synced to the 24-hour cycle of day and night. The master clock is in your hypothalamus, a diamond-shaped part of the brain, but every cell in your body has its own clock, too. Many physiological processes run on circadian rhythms (a term derived from a Latin phrase meaning “about a day”), from sleep-wake cycle to hormone secretion, as well as processes involved in cancer progression, such as cell division.
“There are special photoreceptors in the eye that don’t deal with visual information. They just send light information,” Nelson said. “If you get light at the wrong time, you’re resetting the clocks.”
This internal clock “prepares the body for various recurrent challenges, such as eating,” said Christian Benedict, PhD, a sleep researcher at Uppsala University, Sweden. “Light exposure [at night] can mess up this very important system.” This could mean, for instance, that your insulin is released at the wrong time, Benedict said, causing “a jet lag-ish condition that will then impair the ability to handle blood sugar.” Animal studies confirm that exposure to light at night can reduce glucose tolerance and alter insulin secretion – potential pathways to diabetes.
The hormone melatonin, produced when it’s dark by the pineal gland in the brain, is a key player in this modern struggle. Melatonin helps you sleep, synchronizes the body’s circadian rhythms, protects neurons from damage, regulates the immune system, and fights inflammation. But even a sliver of light at night can suppress its secretion. Less than 30 lux of light, about the level of a pedestrian street at night, can slash melatonin by half.
When lab animals are exposed to nighttime light, they “show enormous neuroinflammation” — that is, inflammation of nervous tissue, Nelson said. In one experiment on humans, those who slept immersed in weak light had higher levels of C-reactive protein in their blood, a marker of inflammation.
Low melatonin has also been linked to cancer. It “allows the metabolic machinery of the cancer cells to be active,” Hanifin said. One of melatonin’s effects is stimulation of natural killer cells, which can recognize and destroy cancer cells. What’s more, when melatonin plunges, estrogen may go up, which could explain the link between light at night and breast cancer (estrogen fuels tumor growth in breast cancers).
Researchers concede that satellite data might be too coarse to estimate how much light people are actually exposed to while they sleep. Plus, many of us are staring at bright screens. “But the studies keep coming,” Nelson said, suggesting that outdoor light pollution does have an impact.
When researchers put wrist-worn light sensors on over 80,000 British people, they found that the more light the device registered between half-past midnight and 6 a.m., the higher its wearer’s risk of having diabetes several years down the road — no matter how long they actually slept. This, according to the study’s authors, supports the findings from satellite data.
A similar study that used actigraphy with built-in light sensors, measuring whether people had been sleeping in complete darkness for at least five hours, found that light pollution upped the risk of heart disease by 74%.
What Can You Do About This?
Not everyone’s melatonin is affected by nighttime light to the same degree. “Some people are very much sensitive to very dim light, whereas others are not as sensitive and need far, far more light stimulation [to impact melatonin],” Benedict said. In one study, some volunteers needed 350 lux to lower their melatonin by half. For such people, flipping on the light in the bathroom at night wouldn’t matter; for others, though, a mere 6 lux — dimmer than twilight — was already enough.
You can protect yourself by keeping your bedroom lights off and your screens stashed away, but avoiding outdoor light pollution may be harder. You can invest in high-quality blackout curtains, of course, although some light may still seep inside. You can plant trees in front of your windows, reorient any motion-detector lights, and even petition your local government to reduce over-illumination of buildings and to choose better streetlights. You can support organizations, such as the International Dark-Sky Association, that work to preserve darkness.
Last but not least, you might want to change your habits. If you live in a particularly light-polluted area, such as the District of Columbia, America’s most light-polluted urban area, you might reconsider late-night walks or drives around the neighborhood. Instead, Hanifin said, read a book in bed, while keeping the light “as dim as you can.” It’s “a much better idea versus being outside in midtown Manhattan,” he said. According to recent recommendations published by Hanifin and his colleagues, when you sleep, there should be no more than 1 lux of illumination at the level of your eyes — about as much as you’d get from having a lit candle 1 meter away.
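The candle comparison above follows from the inverse-square law for light: illuminance in lux equals luminous intensity in candelas divided by distance squared, and an ordinary candle emits roughly 1 candela. A minimal sketch of that relationship (the 1-candela figure is a conventional approximation, not from the article):

```python
# Sketch of the inverse-square law behind the "candle at 1 meter" comparison.
# Assumes a candle emits ~1 candela; illuminance E (lux) = I (cd) / d^2 (m^2).
def illuminance(intensity_cd: float, distance_m: float) -> float:
    """Illuminance in lux at `distance_m` meters from a point source."""
    return intensity_cd / distance_m ** 2

print(illuminance(1.0, 1.0))  # 1.0 lux: the recommended bedtime maximum
print(illuminance(1.0, 2.0))  # 0.25 lux: doubling the distance quarters it
```

The same relation explains why even a distant streetlight through a window can exceed the 1-lux recommendation while a nightlight across the room may not.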
And if we manage to preserve outdoor darkness, and the stars reappear (including the breathtaking Milky Way), we could reap more benefits — some research suggests that stargazing can elicit positive emotions, a sense of personal growth, and “a variety of transcendent thoughts and experiences.”
A version of this article appeared on WebMD.com.
The Appendix: Is It ‘Useless,’ or a Safe House and Immune Training Ground?
When doctors and patients consider the appendix, it’s often with urgency. In cases of appendicitis, the clock could be ticking down to a life-threatening burst. Thus, despite recent research suggesting antibiotics could be an alternative therapy, appendectomy remains standard for uncomplicated appendicitis.
But what if removing the appendix could raise the risk for gastrointestinal (GI) diseases like irritable bowel syndrome and colorectal cancer? That’s what some emerging science suggests. And though the research is early and mixed, it’s enough to give some health professionals pause.
“If there’s no reason to remove the appendix, then it’s better to have one,” said Heather Smith, PhD, a comparative anatomist at Midwestern University, Glendale, Arizona. Preemptive removal is not supported by the evidence, she said.
To be fair, we’ve come a long way since 1928, when American physician Miles Breuer, MD, suggested that people with infected appendixes should be left to perish, so as to remove their inferior DNA from the gene pool (he called such people “uncivilized” and “candidates for extinction”). Charles Darwin, while less radical, believed the appendix was at best useless — a mere vestige of our ancestors switching diets from leaves to fruits.
What we know now is that the appendix isn’t just a troublesome piece of worthless flesh. Instead, it may act as a safe house for friendly gut bacteria and a training camp for the immune system. It also appears to play a role in several medical conditions, from ulcerative colitis and colorectal cancer to Parkinson’s disease and lupus. The roughly 300,000 Americans who undergo appendectomy each year should be made aware of this, some experts say. But the frustrating truth is, scientists are still trying to figure out in which cases having an appendix is protective and in which we may be better off without it.
A ‘Worm’ as Intestinal Protection
The appendix is a blind pouch (meaning its ending is closed off) that extends from the large intestine. Not all mammals have one; it’s been found in several species of primates and rodents, as well as in rabbits, wombats, and Florida manatees, among others (dogs and cats don’t have it). While a human appendix “looks like a little worm,” Dr. Smith said, these anatomical structures come in various sizes and shapes. Some are thick, as in a beaver, while others are long and spiraling, like a rabbit’s.
Comparative anatomy studies reveal that the appendix has evolved independently at least 29 times throughout mammalian evolution. This suggests that “it has some kind of an adaptive function,” Dr. Smith said. When French scientists analyzed data from 258 species of mammals, they discovered that those that possess an appendix live longer than those without one. A possible explanation, the researchers wrote, may lie with the appendix’s role in preventing diarrhea.
When doctors and patients consider the appendix, it’s often with urgency. In cases of appendicitis, the clock could be ticking down to a life-threatening burst. Thus, despite recent research suggesting antibiotics could be an alternative therapy, appendectomy remains standard for uncomplicated appendicitis.
But what if removing the appendix could raise the risk for gastrointestinal (GI) diseases like irritable bowel syndrome and colorectal cancer? That’s what some emerging science suggests. And though the research is early and mixed, it’s enough to give some health professionals pause.
“If there’s no reason to remove the appendix, then it’s better to have one,” said Heather Smith, PhD, a comparative anatomist at Midwestern University, Glendale, Arizona. Preemptive removal is not supported by the evidence, she said.
To be fair, we’ve come a long way since 1928, when American physician Miles Breuer, MD, suggested that people with infected appendixes should be left to perish, so as to remove their inferior DNA from the gene pool (he called such people “uncivilized” and “candidates for extinction”). Charles Darwin, while less radical, believed the appendix was at best useless — a mere vestige of our ancestors switching diets from leaves to fruits.
What we know now is that the appendix isn’t just a troublesome piece of worthless flesh. Instead, it may act as a safe house for friendly gut bacteria and a training camp for the immune system. It also appears to play a role in several medical conditions, from ulcerative colitis and colorectal cancer to Parkinson’s disease and lupus. The roughly 300,000 Americans who undergo appendectomy each year should be made aware of this, some experts say. But the frustrating truth is, scientists are still trying to figure out in which cases having an appendix is protective and in which we may be better off without it.
A ‘Worm’ as Intestinal Protection
The appendix is a blind pouch (meaning its ending is closed off) that extends from the large intestine. Not all mammals have one; it’s been found in several species of primates and rodents, as well as in rabbits, wombats, and Florida manatees, among others (dogs and cats don’t have it). While a human appendix “looks like a little worm,” Dr. Smith said, these anatomical structures come in various sizes and shapes. Some are thick, as in a beaver, while others are long and spiraling, like a rabbit’s.
Comparative anatomy studies reveal that the appendix has evolved independently at least 29 times throughout mammalian evolution. This suggests that “it has some kind of an adaptive function,” Dr. Smith said. When French scientists analyzed data from 258 species of mammals, they discovered that those that possess an appendix live longer than those without one. A possible explanation, the researchers wrote, may lie with the appendix’s role in preventing diarrhea.
Their 2023 study supported this hypothesis. Based on veterinary records of 45 different species of primates housed in a French zoo, the scientists established that primates with appendixes are far less likely to suffer severe diarrhea than those that don’t possess this organ. The appendix, it appears, might be our tiny weapon against bowel troubles.
For immunologist William Parker, PhD, a visiting scholar at the University of North Carolina at Chapel Hill, these data are “about as good as we could hope for” in support of the idea that the appendix might protect mammals from GI problems. An experiment on humans would be unethical, Dr. Parker said. But observational studies offer clues.
One study showed that compared with people with an intact appendix, young adults with a history of appendectomy have more than double the risk of developing a serious infection with non-typhoidal Salmonella of the kind that would require hospitalization.
A ‘Safe House’ for Bacteria
Such studies add weight to a theory that Dr. Parker and his colleagues developed back in 2007: That the appendix acts as a “safe house” for beneficial gut bacteria.
Think of the colon as a wide pipe, Dr. Parker said, that may become contaminated with a pathogen such as Salmonella. Diarrhea follows, and the pipe gets repeatedly flushed, wiping everything clean, including your friendly gut microbiome. Luckily, “you’ve got this little offshoot of that pipe,” where the flow can’t really get in “because it’s so constricted,” Dr. Parker said. The friendly gut microbes can survive inside the appendix and repopulate the colon once diarrhea is over. Dr. Parker and his colleagues found that the human appendix contains a thick layer of beneficial bacteria. “They were right where we predicted they would be,” he said.
This safe house hypothesis could explain why the gut microbiome may be different in people who no longer have an appendix. In one small study, people who’d had an appendectomy had a less diverse microbiome, with a lower abundance of beneficial strains such as Butyricicoccus and Barnesiella, than did those with intact appendixes.
The appendix likely has a second function, too, Dr. Smith said: It may serve as a training camp for the immune system. “When there is an invading pathogen in the gut, it helps the GI system to mount the immune response,” she said. The human appendix is rich in special cells known as M cells. These act as scouts, detecting and capturing invasive bacteria and viruses and presenting them to the body’s defense team, such as the T lymphocytes.
If the appendix shelters beneficial bacteria and boosts immune response, that may explain its links to various diseases. According to an epidemiological study from Taiwan, patients who underwent an appendectomy have a 46% higher risk of developing irritable bowel syndrome (IBS) — a disease associated with a low abundance of Butyricicoccus bacteria. This is why, the study authors wrote, doctors should pay careful attention to people who’ve had their appendixes removed, monitoring them for potential symptoms of IBS.
The same database helped uncover other connections between appendectomy and disease. For one, there was type 2 diabetes: Within 3 years of the surgery, patients under 30 had double the risk of developing this disorder. Then there was lupus: While those who underwent appendectomy generally had higher risk for this autoimmune disease, women were particularly affected.
The Contentious Connections
The most heated scientific discussion surrounds the links between the appendix and conditions such as Parkinson’s disease, ulcerative colitis, and colorectal cancer. A small 2019 study showed, for example, that appendectomy may improve symptoms of certain forms of ulcerative colitis that don’t respond to standard medical treatments. A third of patients improved after their appendix was removed, and 17% fully recovered.
Why? According to Dr. Parker, appendectomy may work for ulcerative colitis because it’s “a way of suppressing the immune system, especially in the lower intestinal areas.” A 2023 meta-analysis found that people who’d had their appendix removed before being diagnosed with ulcerative colitis were less likely to need their colon removed later on.
Such a procedure may have a serious side effect, however: Colorectal cancer. French scientists discovered that removing the appendix may reduce the numbers of certain immune cells called CD3+ and CD8+ T cells, weakening immune surveillance. As a result, tumor cells might escape detection.
Yet the links between appendix removal and cancer are far from clear. A recent meta-analysis found that while people with appendectomies generally had a higher risk for colorectal cancer, the association was not significant among Europeans. In fact, removal of the appendix actually protected European women from this particular form of cancer. For Dr. Parker, such mixed results may stem from the fact that treatments and populations vary widely. The issue “may depend on complex social and medical factors,” Dr. Parker said.
Things also appear complicated with Parkinson’s disease — another condition linked to the appendix. A large epidemiological study showed that appendectomy is associated with a lower risk for Parkinson’s disease and a delayed age of Parkinson’s onset. It also found that a normal appendix contains α-synuclein, a protein that may accumulate in the brain and contribute to the development of Parkinson’s. “Although α-synuclein is toxic when in the brain, it appears to be quite normal when present in the appendix,” said Luis Vitetta, PhD, MD, a clinical epidemiologist at the University of Sydney, Camperdown, Australia. Yet, not all studies find that removing the appendix lowers the risk for Parkinson’s. In fact, some show the opposite results.
How Should Doctors View the Appendix?
Even with these mysteries and contradictions, Dr. Vitetta said, a healthy appendix in a healthy body appears to be protective. This is why, he said, when someone is diagnosed with appendicitis, careful assessment is essential before surgery is performed.
“Perhaps an antibiotic can actually help fix it,” he said. A 2020 study published in The New England Journal of Medicine showed that antibiotics may indeed be a good alternative to surgery for the treatment of appendicitis. “We don’t want necessarily to remove an appendix that could be beneficial,” Dr. Smith said.
The many links between the appendix and various diseases mean that doctors should be more vigilant when treating patients who’ve had this organ removed, Dr. Parker said. “When a patient loses an appendix, depending on their environment, there may be effects on infection and cancer. So they might need more regular checkups,” he said. This could include monitoring for IBS and colorectal cancer.
What’s more, Dr. Parker believes that research on the appendix puts even more emphasis on the need to protect the gut microbiome — such as taking probiotics with antibiotics. And while we are still a long way from understanding how exactly this worm-like structure affects various diseases, one thing appears quite certain: The appendix is not useless. “If Darwin had the information that we have, he would not have drawn these conclusions,” Dr. Parker said.
A version of this article first appeared on Medscape.com.
MDMA therapy for loneliness? Researchers say it could work
Some call the drug “ecstasy” or “molly.” Researchers are calling it a potential tool to help treat loneliness.
As public health experts sound the alarm on a rising loneliness epidemic in the United States and across the globe, researchers are investigating whether the drug could help people feel more connected.
In the latest study, MDMA “led to a robust increase in feelings of connection” among people socializing in a controlled setting. Participants were dosed with either MDMA or a placebo and asked to chat with a stranger. Afterward, those who took MDMA said their companion was more responsive and attentive, and that they had plenty in common. The drug also “increased participants’ ratings of liking their partners, feeling connected and finding the conversation enjoyable and meaningful.”
The study was small — just 18 participants — but its results “have implications for MDMA-assisted therapy,” the authors wrote. “This feeling of connectedness could help patients feel safe and trusting, thereby facilitating deeper emotional exploration.”
MDMA “really does seem to make people want to interact more with other people,” says Harriet de Wit, PhD, a neuropharmacologist at the University of Chicago and one of the study’s authors. The results echo those of earlier research using psychedelics like LSD or psilocybin.
It’s important to note that any intervention involving MDMA or psychedelics would be a drug-assisted therapy — that is, used in conjunction with the appropriate therapy and in a therapeutic setting. MDMA-assisted therapy has already drawn popular and scientific attention, as it recently cleared clinical trials for treating posttraumatic stress disorder (PTSD) and may be nearing approval by the US Food and Drug Administration (FDA).
According to Friederike Holze, PhD, psychopharmacologist at the University of Basel, in Switzerland, “there could be a place” for MDMA and psychedelics in treating chronic loneliness, but only under professional supervision.
There would have to be clear guidelines too, says Joshua Woolley, MD, PhD, a psychiatrist at the University of California, San Francisco.
MDMA and psychedelics “induce this plastic state, a state where people can change. They feel open, they feel like things are possible,” Dr. Woolley says. Then, with therapy, “you can help them change.”
Loneliness Can Impact Our Health
On top of the mental health ramifications, the physiologic effects of loneliness could have grave consequences over time. In observational studies, loneliness has been linked to higher risks for cancer and heart disease, and shorter lifespan. One third of Americans over 45 say they are chronically lonely.
Chronic loneliness changes how we think and behave, research shows. It makes us fear contact with others and see them in a more negative light, as more threatening and less trustworthy. Lonely people prefer to stand farther apart from strangers and avoid touch.
This is where MDMA-assisted therapies could potentially help, by easing these defensive tendencies, according to Dr. Woolley.
MDMA, Psychedelics, and Social Behavior
MDMA, or 3,4-methylenedioxymethamphetamine, is a hybrid between a stimulant and a psychedelic. In Dr. de Wit’s earlier experiments, volunteers given MDMA engaged more in communal activities, chatting, and playing games. They used more positive words during social encounters than those who had received a placebo. And after MDMA, people felt less rejected if they were slighted in Cyberball — a virtual ball-tossing game commonly used to measure the effects of social exclusion.
MDMA has been shown to reduce people’s response to others’ negative emotions, diminishing activation of the amygdala (the brain’s fear center) when people view pictures of angry faces.
This could be helpful. “If you perceive a person’s natural expression as being a little bit angry, if that disappears, then you might be more inclined to interact,” de Wit says.
However, there may be downsides, too. If a drug makes people more trusting and willing to connect, they could be taken advantage of. This is why, Dr. Woolley says, “psychedelics have been used in cults.”
MDMA may also make the experience of touch more pleasant. In a series of experiments in 2019, researchers gently stroked volunteers’ arms with a goat-hair brush, mimicking the comforting gestures one may receive from a loved one. At the same time, the scientists monitored the volunteers’ facial muscles. People on MDMA perceived gentle touch as more pleasant than those on placebo, and their smile muscles activated more.
MDMA and psychedelics boost social behaviors in animals, too — suggesting that their effects on relationships have a biological basis. Rats on MDMA are more likely to lie next to each other, and mice become more resilient to social stress. Even octopuses become more outgoing after a dose of MDMA, choosing to spend more time with other octopuses instead of a new toy. Classic psychedelics show similar effects — LSD, for example, makes mice more social.
Psychedelics can induce a sense of a “dissolution of the self-other boundary,” Dr. Woolley says. People who take them often say it’s “helped them feel more connected to themselves and other people.” LSD, first synthesized in 1938, may help increase empathy in some people.
Psilocybin, a compound found in over 200 species of mushrooms and used for centuries in Mesoamerican rituals, also seems to boost empathy, with effects persisting for at least seven days. In Cyberball, the online ball-throwing game, people who took psilocybin felt less socially rejected, an outcome reflected in their brain activation patterns in one study — the areas responsible for social-pain processing appeared to dim after a dose.
Making It Legal and Putting It to Use
In 2020, Oregon became the first state to establish a regulatory framework for psilocybin for therapeutic use, and Colorado followed suit in 2022. Such therapeutic applications of psilocybin could help fight loneliness as well, Dr. Woolley believes, because a “common symptom of depression is that people feel socially withdrawn and lack motivation,” he says. As mentioned above, MDMA-assisted therapy is also nearing FDA approval for PTSD.
What remains unclear are the exact mechanisms at play.
“MDMA releases oxytocin, and it does that through serotonin receptors,” Dr. de Wit says. Serotonin activates 5-HT1A receptors in the hypothalamus, releasing oxytocin into the bloodstream. In Dr. de Wit’s recent experiments, the more people felt connected after taking MDMA, the more oxytocin was found circulating in their bodies. (Another drug, methamphetamine, also upped the levels of oxytocin but did not increase feelings of connectedness.)
“It’s likely that both something in the serotonin system independent of oxytocin, and oxytocin itself, contribute,” Dr. de Wit says. Dopamine, a neurotransmitter responsible for motivation, appears to increase as well.
The empathy-boosting effects of LSD also seem to be at least partly driven by oxytocin, experiments published in 2021 revealed. Studies in mice, meanwhile, suggest that glutamate, a chemical messenger in the brain, may be behind some of LSD’s prosocial effects.
Scientists are fairly certain which receptors these drugs bind to and which neurotransmitters they affect. “How that gets translated into these higher-order things like empathy and feeling connected to the world, we don’t totally understand,” Dr. Woolley says.
Challenges and the Future
Although MDMA and psychedelics are largely considered safe when taken in a legal, medically controlled setting, there is reason to be cautious.
“They have relatively low impact on the body, like heart rate increase or blood pressure increase. But they might leave some disturbing psychological effects,” says Dr. Holze. Scientists routinely screen experiment volunteers for their risk for psychiatric disorders.
Although risk for addiction is low with both MDMA and psychedelics, there is always some risk for misuse. MDMA “ can produce feelings of well-being, and then people might use it repeatedly, ” Dr. de Wit says. “ That doesn ’ t seem to be a problem for really a lot of people, but it could easily happen. ”
Still, possibilities remain for MDMA in the fight against loneliness.
“[People] feel open, they feel like things are possible, they feel like they’re unstuck,” Dr. Woolley says. “You can harness that in psychotherapy.”
A version of this article appeared on Medscape.com.
Some call the drug “ecstasy” or “molly.” Researchers are calling it a potential tool to help treat loneliness.
As public health experts sound the alarm on a rising loneliness epidemic in the United States and across the globe, researchers are taking a closer look at the drug's potential to help people connect.
In the latest study, MDMA “led to a robust increase in feelings of connection” among people socializing in a controlled setting. Participants were dosed with either MDMA or a placebo and asked to chat with a stranger. Afterward, those who took MDMA said their companion was more responsive and attentive, and that they had plenty in common. The drug also “increased participants’ ratings of liking their partners, feeling connected and finding the conversation enjoyable and meaningful.”
The study was small — just 18 participants — but its results “have implications for MDMA-assisted therapy,” the authors wrote. “This feeling of connectedness could help patients feel safe and trusting, thereby facilitating deeper emotional exploration.”
MDMA “really does seem to make people want to interact more with other people,” says Harriet de Wit, PhD, a neuropharmacologist at the University of Chicago and one of the study’s authors. The results echo those of earlier research using psychedelics like LSD or psilocybin.
It’s important to note that any intervention involving MDMA or psychedelics would be a drug-assisted therapy — that is, used in conjunction with the appropriate therapy and in a therapeutic setting. MDMA-assisted therapy has already drawn popular and scientific attention, as it recently cleared clinical trials for treating posttraumatic stress disorder (PTSD) and may be nearing approval by the US Food and Drug Administration (FDA).
According to Friederike Holze, PhD, psychopharmacologist at the University of Basel, in Switzerland, “there could be a place” for MDMA and psychedelics in treating chronic loneliness, but only under professional supervision.
There would have to be clear guidelines too, says Joshua Woolley, MD, PhD, a psychiatrist at the University of California, San Francisco.
MDMA and psychedelics “induce this plastic state, a state where people can change. They feel open, they feel like things are possible,” Dr. Woolley says. Then, with therapy, “you can help them change.”
Loneliness Can Impact Our Health
On top of the mental health ramifications, the physiologic effects of loneliness could have grave consequences over time. In observational studies, loneliness has been linked to higher risks for cancer and heart disease, and shorter lifespan. One third of Americans over 45 say they are chronically lonely.
Chronic loneliness changes how we think and behave, research shows. It makes us fear contact with others and see them in a more negative light, as more threatening and less trustworthy. Lonely people prefer to stand farther apart from strangers and avoid touch.
This is where MDMA-assisted therapies could potentially help, by easing these defensive tendencies, according to Dr. Woolley.
MDMA, Psychedelics, and Social Behavior
MDMA, or 3,4-methylenedioxymethamphetamine, is a hybrid between a stimulant and a psychedelic. In Dr. de Wit’s earlier experiments, volunteers given MDMA engaged more in communal activities, chatting, and playing games. They used more positive words during social encounters than those who had received a placebo. And after MDMA, people felt less rejected if they were slighted in Cyberball — a virtual ball-tossing game commonly used to measure the effects of social exclusion.
MDMA has been shown to blunt people’s responses to others’ negative emotions, diminishing activation of the amygdala (the brain’s fear center) as they look at pictures of angry faces.
This could be helpful. “If you perceive a person’s natural expression as being a little bit angry, if that disappears, then you might be more inclined to interact,” Dr. de Wit says.
However, there may be downsides, too. If a drug makes people more trusting and willing to connect, they could be taken advantage of. This is why, Dr. Woolley says, “psychedelics have been used in cults.”
MDMA may also make the experience of touch more pleasant. In a series of experiments in 2019, researchers gently stroked volunteers’ arms with a goat-hair brush, mimicking the comforting gestures one may receive from a loved one. At the same time, the scientists monitored the volunteers’ facial muscles. People on MDMA perceived gentle touch as more pleasant than those on placebo, and their smile muscles activated more.
MDMA and psychedelics boost social behaviors in animals, too — suggesting that their effects on relationships have a biological basis. Rats on MDMA are more likely to lie next to each other, and mice become more resilient to social stress. Even octopuses become more outgoing after a dose of MDMA, choosing to spend more time with other octopuses instead of a new toy. Classic psychedelics show similar effects — LSD, for example, makes mice more social.
Psychedelics can induce a sense of a “dissolution of the self-other boundary,” Dr. Woolley says. People who take them often say it’s “helped them feel more connected to themselves and other people.” LSD, first synthesized in 1938, may help increase empathy in some people.
Psilocybin, a compound found in over 200 species of mushrooms and used for centuries in Mesoamerican rituals, also seems to boost empathy, with effects persisting for at least seven days. In Cyberball, the online ball-throwing game, people who took psilocybin felt less socially rejected, an outcome reflected in their brain activation patterns in one study — the areas responsible for social-pain processing appeared to dim after a dose.
Making It Legal and Putting It to Use
In 2020, Oregon became the first state to establish a regulatory framework for psilocybin for therapeutic use, and Colorado followed suit in 2022. Such therapeutic applications of psilocybin could help fight loneliness as well, Dr. Woolley believes, because a “common symptom of depression is that people feel socially withdrawn and lack motivation,” he says. As mentioned above, MDMA-assisted therapy is also nearing FDA approval for PTSD.
The exact mechanisms at play, however, remain unclear.
“MDMA releases oxytocin, and it does that through serotonin receptors,” Dr. de Wit says. Serotonin activates 5-HT1A receptors in the hypothalamus, releasing oxytocin into the bloodstream. In Dr. de Wit’s recent experiments, the more people felt connected after taking MDMA, the more oxytocin was found circulating in their bodies. (Another drug, methamphetamine, also upped the levels of oxytocin but did not increase feelings of connectedness.)
“It’s likely that both something in the serotonin system independent of oxytocin, and oxytocin itself, contribute,” Dr. de Wit says. Dopamine, a neurotransmitter responsible for motivation, appears to increase as well.
The empathy-boosting effects of LSD also seem to be at least partly driven by oxytocin, experiments published in 2021 revealed. Studies in mice, meanwhile, suggest that glutamate, a chemical messenger in the brain, may be behind some of LSD’s prosocial effects.
Scientists are fairly certain which receptors these drugs bind to and which neurotransmitters they affect. “How that gets translated into these higher-order things like empathy and feeling connected to the world, we don’t totally understand,” Dr. Woolley says.
Challenges and the Future
Although MDMA and psychedelics are largely considered safe when taken in a legal, medically controlled setting, there is reason to be cautious.
“They have relatively low impact on the body, like heart rate increase or blood pressure increase. But they might leave some disturbing psychological effects,” says Dr. Holze. Scientists routinely screen experiment volunteers for their risk for psychiatric disorders.
Although risk for addiction is low with both MDMA and psychedelics, there is always some risk for misuse. MDMA “can produce feelings of well-being, and then people might use it repeatedly,” Dr. de Wit says. “That doesn’t seem to be a problem for really a lot of people, but it could easily happen.”
Still, possibilities remain for MDMA in the fight against loneliness.
“[People] feel open, they feel like things are possible, they feel like they’re unstuck,” Dr. Woolley says. “You can harness that in psychotherapy.”
A version of this article appeared on Medscape.com.
Unlocking the riddle of REM sleep
Eugene Aserinsky, PhD, never wanted to study sleep. He had tried social work and dental school, and even did a stint in the army as an explosives handler. He enrolled at the University of Chicago to pursue organ physiology, but all potential supervisors were too busy to take him on. His only choice was Nathaniel Kleitman, PhD, a middle-aged professor whom Dr. Aserinsky described as “always serious.” Dr. Kleitman was doing research on sleep and so, grudgingly, Dr. Aserinsky followed suit.
Two years later, in 1953, the duo published a paper that shattered the prevailing view of sleep. They described a strange phenomenon Dr. Aserinsky later called REM sleep: periods of rapid eye movements paired with wakefulness-like activity in the brain. “We are still at the very beginning of understanding this phenomenon,” Mark Blumberg, PhD, professor of psychological and brain sciences at University of Iowa, Iowa City, said in an interview.
Before Dr. Aserinsky walked into Dr. Kleitman’s lab, the widespread belief held that sleep was “the antithesis of wakefulness,” as Dr. Kleitman wrote in his seminal 1939 book, “Sleep and Wakefulness.” Others saw it as a kind of coma, a passive state. Another theory, developed in the early 20th century by French psychologist Henri Piéron, PhD, held that sleepiness is caused by an accumulation of ‘hypnotoxins’ in the brain.
In his 1913 study, one that would likely fail a contemporary ethics review, Dr. Piéron drew fluid from the brains of sleep-deprived dogs and injected it into other dogs to induce sleep. As he explained in an interview with The Washington Times in 1933, he believed that fatigue toxins accumulate in the brain throughout the wakeful hours, then slowly seep into the spinal column, promoting drowsiness. Once we fall asleep, Dr. Piéron claimed, the hypnotoxins burn away.
From blinking to rapid eye movement
In 1925, when Dr. Kleitman established the world’s first sleep laboratory at the University of Chicago, sleep was a fringe science that most researchers gave a wide berth. Yet Dr. Kleitman was obsessed. The Moldova-born scientist famously worked 24/7 – literally. He not only stayed long hours in his lab, but also slept attached to a plethora of instruments to measure his brain waves, breathing, and heartbeat. At one point, Dr. Kleitman stayed awake for 180 hours (more than a week), to check how forced sleeplessness would affect his body (he later compared it to torture). He also lived 2 weeks aboard a submarine, moved his family north of the Arctic Circle, and spent over a month 119 feet below the surface in a cave in Kentucky, fighting rats, cold, and humidity to study circadian rhythms.
Dr. Kleitman was intrigued by an article in Nature in which the author asserted that he could detect the approach of slumber in train passengers by observing their blink frequencies. He instructed Dr. Aserinsky to observe sleeping infants (being monitored for a different study), to see how their blinking related to sleep. Yet Dr. Aserinsky was not amused. The project, he later wrote, “seemed about as exciting as warm milk.”
Dr. Aserinsky was at first uncertain whether eyelid movement with the eyes closed even constituted a blink. Then he noticed a 20-minute span in each hour when eye movement ceased entirely. Still short of getting his degree, Dr. Aserinsky decided to observe sleeping adults. He hauled a dusty clanker of a brain-wave machine out of the university’s basement and started registering the electrical activity of his dozing subjects’ brains. Soon, he noticed something weird.
As he kept watching the sleeping adults, he noticed that at times they would make jerky, saccade-like eye movements just as the EEG machine registered a wake-like state of the brain. At first, he thought the machine was broken (it was ancient, after all). Then, that the subjects were awake and just keeping their eyes shut. Yet after conducting several sessions and tinkering with the EEG machine, Dr. Aserinsky finally concluded that the recordings and observations were correct: Something was indeed happening during sleep that kept the cortex activated and set the subjects’ eyes darting.
Dreams, memory, and thermoregulation
After studying dozens of subjects, including his son and Dr. Kleitman’s daughter, and using miles of polygraph paper, the two scientists published their findings in September 1953 in the journal Science. Dr. Kleitman didn’t expect the discovery to be particularly earth-shattering. When asked in a later interview how much research and excitement he thought the paper would generate, he replied: “none whatsoever.” That’s not how things went, though. “They completely changed the way people think,” Dr. Blumberg said. Once and for all, the REM discovery put to rest the idea that sleep was a passive state where nothing interesting happens.
Dr. Aserinsky soon left the University of Chicago, while Dr. Kleitman continued research on rapid eye movements in sleep with his new student, William Dement, MD. Together, they published studies suggesting that REM periods were when dreaming occurred – they reported that people who were awakened during REM sleep were far more likely to recall dreams than were those awakened outside of that period. “REM sleep = dreams” became established dogma for decades, even though first reports of dreams during non-REM sleep came as early as Dr. Kleitman’s and Dr. Dement’s original research (they assumed these were recollections from the preceding REM episodes).
“It turns out that you can have a perfectly good dream when you haven’t had a previous REM sleep period,” said Jerome Siegel, PhD, professor of psychiatry and biobehavioral sciences at UCLA’s Center for Sleep Research, pointing out that equating REM sleep with dreams is still “a common misconception.”
By the 1960s, REM sleep seemed to be well defined as the combination of rapid eye movement with EEG showing brain activation, first noted by Dr. Aserinsky, as well as muscle atonia – a state of near-total muscle relaxation or paralysis. Today, however, Dr. Blumberg said, things are considerably less clear cut. In one recent paper, Dr. Blumberg and his colleagues went as far as to question whether REM sleep is even “a thing.” REM sleep is prevalent across terrestrial vertebrates, but they found that it is also highly nuanced, messing up old definitions.
Take the platypus, for example, the animal with the most REM sleep (as far as we know): They have rapid eye movements and their bills twitch during REM (stillness punctuated by sudden twitches is typical of that period of sleep), but they don’t have the classic brain activation on EEG. Owls have EEG activation and twitching, but no rapid eye movements, since their eyes are largely immobile. Geese, meanwhile, are missing muscle atonia – that’s why they can sleep standing. And new studies are still coming in, showing, for instance, that even jumping spiders may have REM sleep, complete with jerky eye movements and limb twitching.
For Dr. Siegel, the findings on REM sleep in animals point to the potential explanation of what that bizarre stage of sleep may be all about: thermoregulation. “When you look at differences in sleep among the groups of warm-blooded animals, the correlation is almost perfect, and inverse. The colder they are, the more REM sleep they get,” Dr. Siegel said. During REM sleep, body thermoregulation is basically suspended, and so, as Dr. Siegel argued in The Lancet Neurology last fall, REM sleep could be a vital player in managing our brain’s temperature and metabolic activity during sleep.
Wallace B. Mendelson, MD, professor emeritus of psychiatry at the University of Chicago, said it’s likely, however, that REM sleep serves more than one purpose. “There is no reason why one single theory has to be an answer. Most important physiological functions have multiple functions,” he said. The ideas are many, including that REM sleep helps consolidate our memories and plays an important role in emotion regulation. But it’s not that simple. A Swiss study of nearly 1,000 healthy participants did not show any correlation between sleep stage and memory consolidation. Sleep disruption of any stage can prevent memory consolidation, and quiet wakefulness with closed eyes can be as effective as sleep for memory recall.
In 1971, researchers from the National Institute of Mental Health published results of their study on total suppression of REM sleep. For as long as 40 days, they administered the monoamine oxidase inhibitor (MAOI) phenelzine, a type of drug that can completely eliminate REM sleep, to six patients with anxiety and depression. They showed that suppression of REM sleep could improve symptoms of depression, seemingly without impairing the patients’ cognitive function. Modern antidepressants, too, can greatly diminish REM sleep, Dr. Siegel said. “I’m not aware that there is any dramatic downside in having REM sleep reduced,” he said.
So do we even need REM sleep for optimal performance? Dr. Siegel said that there is a lot of exaggeration about how great REM sleep is for our health. “People just indulge their imaginations,” he said.
Dr. Blumberg pointed out that, in general, as long as you get enough sleep in the first place, you will get enough REM. “You can’t control the amount of REM sleep you have,” he explained.
REM sleep behavior disorder
Even though we may not need REM sleep to function well, REM sleep behavior disorder (RBD) is a sign that our health may be in trouble. In 1986, scientists from the University of Minnesota reported a bizarre REM sleep pathology in four men and one woman who would act out their dreams. One 67-year-old man, for example, reportedly punched and kicked his wife at night for years. One time he found himself kneeling alongside the bed with his arms extended as if he were holding a rifle (he dreamt he was in a shootout). His overall health, however, seemed unaffected apart from self-injury during some episodes.
However, in 1996 the same group of researchers reported that 11 of 29 men originally diagnosed with RBD went on to develop a parkinsonian disorder. Combined data from 24 centers of the International RBD Study Group puts that number as high as 74% at 12-year follow-up. These patients get diagnosed with Parkinson’s disease, dementia with Lewy bodies, or multiple system atrophy. Scientists believe that the protein alpha-synuclein forms toxic clumps in the brain, which are responsible both for malfunctioning of muscle atonia during REM sleep and subsequent neurodegenerative disorders.
While some researchers say that RBD may offer a unique window into better understanding REM sleep, we’re still a long way off from fully figuring out this biological phenomenon. According to Dr. Blumberg, the story of REM sleep has arguably become more muddled in the 7 decades since Dr. Aserinsky and Dr. Kleitman published their original findings, dispelling myths about ‘fatigue toxins’ and sleep as a passive, coma-like state. Dr. Mendelson concurred: “It truly remains a mystery.”
Dr. Blumberg, Dr. Mendelson, and Dr. Siegel reported no relevant disclosures.
A version of this article originally appeared on Medscape.com.
Eugene Aserinsky, PhD, never wanted to study sleep. He tried being a social worker, a dental student, and even did a stint in the army as an explosives handler. He enrolled at the University of Chicago to pursue organ physiology, but all potential supervisors were too busy to take him on. His only choice was Nathaniel Kleitman, PhD, a middle-aged professor whom Dr. Aserinsky described as “always serious.” Dr. Kleitman was doing research on sleep and so, grudgingly, Dr. Aserinsky had followed suit.
Two years later, in 1953, the duo published a paper that shattered the way we saw sleep. They described a weird phenomenon Dr. Aserinsky later called REM sleep: periods of rapid eye movements paired with wakefulness-like activity in the brain. “We are still at the very beginning of understanding this phenomenon,” Mark Blumberg, PhD, professor of psychological and brain sciences at University of Iowa, Iowa City, said in an interview.
Before Dr. Aserinsky had walked into Dr. Kleitman’s lab, the widespread belief held that sleep was “the antithesis of wakefulness,” as Dr. Kleitman wrote in his seminal 1939 book, “Sleep and Wakefulness.” Others saw it as a kind of a coma, a passive state. Another theory, developed in the early 20th century by French psychologist Henri Piéron, PhD, held that sleepiness is caused by an accumulation of ‘hypnotoxins’ in the brain.
In his 1913 study that would likely fail a contemporary ethics review, Dr. Piéron drew fluid from the brains of sleep-deprived dogs and injected it into other dogs to induce sleep. As he explained in an interview with The Washington Times in 1933, he said he believed that fatigue toxins accumulate in the brain throughout the wakeful hours, then slowly seep into the spinal column, promoting drowsiness. Once we fall asleep, Dr. Piéron claimed, the hypnotoxins burn away.
From blinking to rapid eye movement
In 1925 when Dr. Kleitman established the world’s first sleep laboratory at the University of Chicago, sleep was a fringe science that most researchers avoided with a wide berth. Yet Dr. Kleitman was obsessed. The Moldova-born scientist famously worked 24/7 – literally. He not only stayed long hours in his lab, but also slept attached to a plethora of instruments to measure his brain waves, breathing, and heartbeat. At one point, Dr. Kleitman stayed awake for 180 hours (more than a week), to check how forced sleeplessness would affect his body (he later compared it to torture). He also lived 2 weeks aboard a submarine, moved his family north of the Arctic Circle, and spent over a month 119 feet below the surface in a cave in Kentucky, fighting rats, cold, and humidity to study circadian rhythms.
Dr. Kleitman was intrigued by an article in Nature in which the author asserted that he could detect the approach of slumber in train passengers by observing their blink frequencies. He instructed Dr. Aserinsky to observe sleeping infants (being monitored for a different study), to see how their blinking related to sleep. Yet Dr. Aserinsky was not amused. The project, he later wrote, “seemed about as exciting as warm milk.”
At first, Dr. Aserinsky was uncertain whether eyelid movement with the eyes closed constituted a blink; then he noticed a 20-minute span in each hour when eye movement ceased entirely. Still short of getting his degree, Dr. Aserinsky decided to observe sleeping adults. He hauled a dusty clanker of a brain-wave machine out of the university’s basement and started registering the electrical activity of his dozing subjects’ brains. Soon, he noticed something weird.
As he kept staring at the sleeping adults, he noticed that at times they’d have saccadic-like eye movements, just as the EEG machine would register a wake-like state of the brain. At first, he thought the machine was broken (it was ancient, after all). Then, that the subjects were awake and just keeping their eyes shut. Yet after conducting several sessions and tinkering with the EEG machine, Dr. Aserinsky finally concluded that the recordings and observations were correct: Something was indeed happening during sleep that kept the cortex activated and made the subjects’ eyes move in a jerky manner.
Dreams, memory, and thermoregulation
After studying dozens of subjects, including his son and Dr. Kleitman’s daughter, and using miles of polygraph paper, the two scientists published their findings in September 1953 in the journal Science. Dr. Kleitman didn’t expect the discovery to be particularly earth-shattering. When asked in a later interview how much research and excitement he thought the paper would generate, he replied: “none whatsoever.” That’s not how things went, though. “They completely changed the way people think,” Dr. Blumberg said. Once and for all, the REM discovery put to rest the idea that sleep was a passive state where nothing interesting happens.
Dr. Aserinsky soon left the University of Chicago, while Dr. Kleitman continued research on rapid eye movements in sleep with his new student, William Dement, MD. Together, they published studies suggesting that REM periods were when dreaming occurred – they reported that people who were awakened during REM sleep were far more likely to recall dreams than were those awakened outside of that period. “REM sleep = dreams” became established dogma for decades, even though first reports of dreams during non-REM sleep came as early as Dr. Kleitman’s and Dr. Dement’s original research (they assumed these were recollections from the preceding REM episodes).
“It turns out that you can have a perfectly good dream when you haven’t had a previous REM sleep period,” said Jerome Siegel, PhD, professor of psychiatry and biobehavioral sciences at UCLA’s Center for Sleep Research, pointing out that equating REM sleep with dreams is still “a common misconception.”
By the 1960s, REM sleep seemed to be well defined as the combination of rapid eye movement with EEG showing brain activation, first noted by Dr. Aserinsky, as well as muscle atonia – a state of near-total muscle relaxation or paralysis. Today, however, Dr. Blumberg said, things are considerably less clear cut. In one recent paper, Dr. Blumberg and his colleagues went as far as to question whether REM sleep is even “a thing.” REM sleep is prevalent across terrestrial vertebrates, but they found that it is also highly nuanced, complicating the old definitions.
Take the platypus, for example, the animal with the most REM sleep (as far as we know): They have rapid eye movements and their bills twitch during REM (stillness punctuated by sudden twitches is typical of that period of sleep), but they don’t have the classic brain activation on EEG. Owls have EEG activation and twitching, but no rapid eye movements, since their eyes are largely immobile. Geese, meanwhile, are missing muscle atonia – that’s why they can sleep standing. And new studies are still coming in, showing, for instance, that even jumping spiders may have REM sleep, complete with jerky eye movements and limb twitching.
For Dr. Siegel, the findings on REM sleep in animals point to the potential explanation of what that bizarre stage of sleep may be all about: thermoregulation. “When you look at differences in sleep among the groups of warm-blooded animals, the correlation is almost perfect, and inverse. The colder they are, the more REM sleep they get,” Dr. Siegel said. During REM sleep, body thermoregulation is basically suspended, and so, as Dr. Siegel argued in The Lancet Neurology last fall, REM sleep could be a vital player in managing our brain’s temperature and metabolic activity during sleep.
Wallace B. Mendelson, MD, professor emeritus of psychiatry at the University of Chicago, said it’s likely, however, that REM sleep has more than one function. “There is no reason why one single theory has to be an answer. Most important physiological functions have multiple functions,” he said. The ideas are many, including that REM sleep helps consolidate our memories and plays an important role in emotion regulation. But it’s not that simple. A Swiss study of nearly 1,000 healthy participants did not show any correlation between sleep stage and memory consolidation. Sleep disruption of any stage can prevent memory consolidation, and quiet wakefulness with closed eyes can be as effective as sleep for memory recall.
In 1971, researchers from the National Institute of Mental Health published results of their study on total suppression of REM sleep. For as long as 40 days, they administered the monoamine oxidase inhibitor (MAOI) phenelzine, a type of drug that can completely eliminate REM sleep, to six patients with anxiety and depression. They showed that suppression of REM sleep could improve symptoms of depression, seemingly without impairing the patients’ cognitive function. Modern antidepressants, too, can greatly diminish REM sleep, Dr. Siegel said. “I’m not aware that there is any dramatic downside in having REM sleep reduced,” he said.
So do we even need REM sleep for optimal performance? Dr. Siegel said that there is a lot of exaggeration about how great REM sleep is for our health. “People just indulge their imaginations,” he said.
Dr. Blumberg pointed out that, in general, as long as you get enough sleep in the first place, you will get enough REM. “You can’t control the amount of REM sleep you have,” he explained.
REM sleep behavior disorder
Even though we may not need REM sleep to function well, REM sleep behavior disorder (RBD) is a sign that our health may be in trouble. In 1986, scientists from the University of Minnesota reported a bizarre REM sleep pathology in four men and one woman who would act out their dreams. One 67-year-old man, for example, reportedly punched and kicked his wife at night for years. One time he found himself kneeling alongside the bed with his arms extended as if he were holding a rifle (he dreamt he was in a shootout). His overall health, however, seemed unaffected apart from self-injury during some episodes.
However, in 1996 the same group of researchers reported that 11 of 29 men originally diagnosed with RBD went on to develop a parkinsonian disorder. Combined data from 24 centers of the International RBD Study Group puts that number as high as 74% at 12-year follow-up. These patients get diagnosed with Parkinson’s disease, dementia with Lewy bodies, or multiple system atrophy. Scientists believe that the protein alpha-synuclein forms toxic clumps in the brain, which are responsible both for malfunctioning of muscle atonia during REM sleep and subsequent neurodegenerative disorders.
While some researchers say that RBD may offer a unique window into better understanding REM sleep, we’re still a long way off from fully figuring out this biological phenomenon. According to Dr. Blumberg, the story of REM sleep has arguably become more muddled in the 7 decades since Dr. Aserinsky and Dr. Kleitman published their original findings, dispelling myths about ‘fatigue toxins’ and sleep as a passive, coma-like state. Dr. Mendelson concurred: “It truly remains a mystery.”
Dr. Blumberg, Dr. Mendelson, and Dr. Siegel reported no relevant disclosures.
A version of this article originally appeared on Medscape.com.
The truth about the ‘happy hormone’: Why we shouldn’t mess with dopamine
Google the word “dopamine” and you will learn that its nicknames are the “happy hormone” and the “pleasure molecule” and that it is among the most important chemicals in our brains. With The Guardian branding it “the Kim Kardashian of neurotransmitters,” dopamine has become a true pop-science darling – people across the globe have attempted to boost their mood with dopamine fasts and dopamine dressing.
A century ago, however, newly discovered dopamine was seen as an uninspiring chemical, nothing more than a precursor of noradrenaline. It took several stubborn and hardworking scientists to change that view.
Levodopa: An indifferent precursor
When Casimir Funk, PhD, a Polish biochemist and the discoverer of vitamins, first synthesized the dopamine precursor levodopa in 1911, he had no idea how important the molecule would prove to be in pharmacology and neurobiology. Nor did Markus Guggenheim, PhD, a Swiss biochemist, who isolated levodopa in 1913 from the seeds of a broad bean, Vicia faba. Dr. Guggenheim administered 1 g of levodopa to a rabbit, with no apparent negative consequences. He then prepared a larger dose (2.5 g) and tested it on himself. “Ten minutes after taking it, I felt very nauseous, I had to vomit twice,” he wrote in his paper. In the body, levodopa is converted into dopamine, which may act as an emetic – an effect Dr. Guggenheim didn’t understand. He simply abandoned his human study, erroneously concluding, on the basis of his animal research, that levodopa is “pharmacologically fairly indifferent.”
Around the same time, several scientists across Europe successfully synthesized dopamine, but those discoveries were shelved without much fanfare. For the next 3 decades, dopamine and levodopa were pushed into academic obscurity. Just before World War II, a group of German scientists showed that levodopa is metabolized to dopamine in the body, while another German researcher, Hermann Blaschko, MD, discovered that dopamine is an intermediary in the synthesis of noradrenaline. Even these findings, however, were not immediately accepted.
The dopamine story picked up pace in the post-war years with the observation that the hormone was present in various tissues and body fluids, although nowhere as abundantly as in the central nervous system. Intrigued, Dr. Blaschko (who, after escaping Nazi Germany, had changed his name to Hugh and begun working at Oxford University in England) hypothesized that dopamine couldn’t be an unremarkable precursor of noradrenaline – it had to have some physiologic functions of its own. He asked his postdoctoral fellow, Oleh Hornykiewicz, MD, to test a few ideas. Dr. Hornykiewicz soon confirmed that dopamine lowered blood pressure in guinea pigs, proving that dopamine indeed had physiologic activity independent of other catecholamines.
Reserpine and rabbit ears
While Dr. Blaschko and Dr. Hornykiewicz were puzzling over dopamine’s physiologic role in the body, across the ocean at the National Heart Institute in Maryland, pharmacologist Bernard Brodie, PhD, and colleagues were laying the groundwork for the discovery of dopamine’s starring role in the brain.
Spoiler alert: Dr. Brodie’s work showed that a new psychiatric drug known as reserpine was capable of fully depleting the brain’s stores of serotonin and – of greatest significance, as it turned out – mimicking the neuromuscular symptoms typical of Parkinson’s disease. The connection to dopamine would be made by new lab colleague Arvid Carlsson, MD, PhD, who would go on to win a Nobel Prize.
Derived from Rauwolfia serpentina (a plant that for centuries has been used in India for the treatment of mental illness, insomnia, and snake bites), reserpine was introduced in the West as a treatment for schizophrenia.
It worked wonders. In 1954, the press lauded the “dramatic” and seemingly “incredible” results in treating “hopelessly insane patients.” Reserpine had a downside, however. Reports soon turned to the drug’s severe side effects, including headaches, dizziness, vomiting, and, far more disturbingly, symptoms mimicking Parkinson’s disease, from muscular rigidity to tremors.
Dr. Brodie observed that, when injected with reserpine, rabbits became completely immobile. Serotonin nearly vanished from their brains, but bizarrely, drugs that spur serotonin production did not reverse the animals’ immobility.
Dr. Carlsson realized that other catecholamines must be involved in reserpine’s side effects, and he began to search for the culprits. He moved back to his native Sweden and ordered a spectrophotofluorimeter. In one of his experiments, Dr. Carlsson injected a pair of rabbits with reserpine, which caused the animals to become catatonic with flattened ears. Within 15 minutes of being injected with levodopa, the rabbits were hopping around, ears proudly vertical. “We were just as excited as the rabbits,” Dr. Carlsson later recalled in a 2016 interview. Because there was no noradrenaline in the rabbits’ brains, Dr. Carlsson realized, dopamine depletion must have been directly responsible for producing reserpine’s motor inhibitory effects.
Skeptics are silenced
In 1960, however, the medical community was not yet ready to accept that dopamine was anything but a boring intermediate between levodopa and noradrenaline. At a prestigious London symposium, Dr. Carlsson and his two colleagues presented their hypothesis that dopamine may be a neurotransmitter, thus implicating it in Parkinson’s disease. They were met with harsh criticism. Some of the experts said levodopa was nothing more than a poison. Dr. Carlsson later recalled facing “a profound and nearly unanimous skepticism regarding our points of view.”
That would soon change. Dr. Hornykiewicz, the biochemist who had earlier discovered dopamine’s BP-lowering effects, tested Dr. Carlsson’s ideas using the postmortem brains of patients with Parkinson’s disease. It appeared Dr. Carlsson was right: Unlike healthy brains, the striatum of patients with Parkinson’s disease contained almost no dopamine whatsoever. Beginning in 1961, in collaboration with neurologist Walther Birkmayer, MD, Dr. Hornykiewicz injected levodopa into 20 patients with Parkinson’s disease and observed a “miraculous” (albeit temporary) amelioration of rigidity, motionlessness, and speechlessness.
By the late 1960s, levodopa and dopamine were making headlines. A 1969 New York Times article described similar stunning improvements in patients with Parkinson’s disease who were treated with levodopa. A patient who had arrived at a hospital unable to speak, with hands clenched and rigid expression, was suddenly able to stride into his doctor’s office and even jog around. “I might say I’m a human being,” he told reporters. Although the treatment was expensive – equivalent to $210 in 2022 – physicians were deluged with requests for “dopa.” To this day, levodopa remains a gold standard in the treatment of Parkinson’s disease.
Still misunderstood
The history of dopamine, however, is not only about Parkinson’s disease but extends to the treatment of schizophrenia and addiction. When, in the 1940s, a French military surgeon started giving a new antihistamine drug, promethazine, to prevent shock in soldiers undergoing surgery, he noticed a bizarre side effect: the soldiers would become euphoric yet oddly calm at the same time.
After the drug was modified by adding a chlorine atom and renamed chlorpromazine, it fast became a go-to treatment for psychosis. At the time, no one made the connection to dopamine. Contemporary doctors believed that it calmed people by lowering body temperature (common treatments for mental illness back in the day included swaddling patients in cold, wet sheets). Yet just like reserpine, chlorpromazine produced a range of nasty side effects that closely mimicked Parkinson’s disease. This led a Dutch pharmacologist, Jacques van Rossum, to hypothesize that dopamine receptor blockade could explain chlorpromazine’s antipsychotic effects – an idea that remains widely accepted today.
In the 1970s, dopamine was linked with addiction through research on rodents, and this novel idea caught people’s imagination over the coming decades. A story on dopamine titled, “How We Get Addicted,” made the cover of Time in 1997.
Yet as the dopamine/addiction connection became widespread, it also became oversimplified. According to a 2015 article in Nature Reviews Neuroscience, a wave of low-quality research followed – nonreplicated, insufficient – which led the authors to conclude that we are “addicted to the dopamine theory of addiction.” Just about every pleasure under the sun was being attributed to dopamine, from eating delicious foods and playing computer games to sex, music, and hot showers. As recent science shows, however, dopamine is not simply about pleasure – it’s about reward prediction, response to stress, memory, learning, and even the functioning of the immune system. Since its first synthesis in the early 20th century, dopamine has often been misunderstood and oversimplified – and it seems the story is repeating itself now.
In one of his final interviews, Dr. Carlsson, who passed away in 2018 at the age of 95, warned about playing around with dopamine and, in particular, prescribing drugs that have an inhibitory action on this neurotransmitter. “Dopamine is involved in everything that happens in our brains – all its important functions,” he said.
We should be careful how we handle such a delicate and still little-known system.
A version of this article first appeared on Medscape.com.
Skeptics are silenced
In 1960, however, the medical community was not yet ready to accept that dopamine was anything but a boring intermediate between levodopa and noradrenaline. At a prestigious London symposium, Dr. Carlsson and his two colleagues presented their hypothesis that dopamine may be a neurotransmitter, thus implicating it in Parkinson’s disease. They were met with harsh criticism. Some of the experts said levodopa was nothing more than a poison. Dr. Carlsson later recalled facing “a profound and nearly unanimous skepticism regarding our points of view.”
That would soon change. Dr. Hornykiewicz, the biochemist who had earlier discovered dopamine’s BP-lowering effects, tested Dr. Carlsson’s ideas using the postmortem brains of Parkinson’s disease patients. It appeared Dr. Carlsson was right: Unlike in healthy brains, the striatum of patients with Parkinson’s disease contained almost no dopamine whatsoever. Beginning in 1961, in collaboration with neurologist Walther Birkmayer, MD, Hornykiewicz injected levodopa into 20 patients with Parkinson’s disease and observed a “miraculous” (albeit temporary) amelioration of rigidity, motionlessness, and speechlessness.
By the late 1960s, levodopa and dopamine were making headlines. A 1969 New York Times article described similar stunning improvements in patients with Parkinson’s disease who were treated with levodopa. A patient who had arrived at a hospital unable to speak, with hands clenched and rigid expression, was suddenly able to stride into his doctor’s office and even jog around. “I might say I’m a human being,” he told reporters. Although the treatment was expensive – equivalent to $210 in 2022 – physicians were deluged with requests for “dopa.” To this day, levodopa remains a gold standard in the treatment of Parkinson’s disease.
Still misunderstood
The history of dopamine, however, is not only about Parkinson’s disease but extends to the treatment of schizophrenia and addiction. When in the1940s a French military surgeon started giving a new antihistamine drug, promethazine, to prevent shock in soldiers undergoing surgery, he noticed a bizarre side effect: the soldiers would become euphoric yet oddly calm at the same time.
After the drug was modified by adding a chlorine atom and renamed chlorpromazine, it fast became a go-to treatment for psychosis. At the time, no one made the connection to dopamine. Contemporary doctors believed that it calmed people by lowering body temperature (common treatments for mental illness back in the day included swaddling patients in cold, wet sheets). Yet just like reserpine, chlorpromazine produced range of nasty side effects that closely mimicked Parkinson’s disease. This led a Dutch pharmacologist, Jacques van Rossum, to hypothesize that dopamine receptor blockade could explain chlorpromazine’s antipsychotic effects – an idea that remains widely accepted today.
In the 1970s, dopamine was linked with addiction through research on rodents, and this novel idea caught people’s imagination over the coming decades. A story on dopamine titled, “How We Get Addicted,” made the cover of Time in 1997.
Yet as the dopamine/addiction connection became widespread, it also became oversimplified. According to a 2015 article in Nature Reviews Neuroscience, a wave of low-quality research followed – nonreplicated, insufficient – which led the authors to conclude that we are “addicted to the dopamine theory of addiction.” Just about every pleasure under the sun was being attributed to dopamine, from eating delicious foods and playing computer games to sex, music, and hot showers. As recent science shows, however, dopamine is not simply about pleasure – it’s about reward prediction, response to stress, memory, learning, and even the functioning of the immune system. Since its first synthesis in the early 20th century, dopamine has often been misunderstood and oversimplified – and it seems the story is repeating itself now.
In one of his final interviews, Dr. Carlsson, who passed away in 2018 at the age of 95, warned about playing around with dopamine and, in particular, prescribing drugs that have an inhibitory action on this neurotransmitter. “Dopamine is involved in everything that happens in our brains – all its important functions,” he said.
We should be careful how we handle such a delicate and still little-known system.
A version of this article first appeared on Medscape.com.
Google the word “dopamine” and you will learn that its nicknames are the “happy hormone” and the “pleasure molecule” and that it is among the most important chemicals in our brains. With The Guardian branding it “the Kim Kardashian of neurotransmitters,” dopamine has become a true pop-science darling – people across the globe have attempted to boost their mood with dopamine fasts and dopamine dressing.
A century ago, however, newly discovered dopamine was seen as an uninspiring chemical, nothing more than a precursor of noradrenaline. It took several stubborn and hardworking scientists to change that view.
Levodopa: An indifferent precursor
When Casimir Funk, PhD, a Polish biochemist and the discoverer of vitamins, first synthesized the dopamine precursor levodopa in 1911, he had no idea how important the molecule would prove to be in pharmacology and neurobiology. Nor did Markus Guggenheim, PhD, a Swiss biochemist, who isolated levodopa in 1913 from the seeds of a broad bean, Vicia faba. Dr. Guggenheim administered 1 g of levodopa to a rabbit, with no apparent negative consequences. He then prepared a larger dose (2.5 g) and tested it on himself. “Ten minutes after taking it, I felt very nauseous, I had to vomit twice,” he wrote in his paper. In the body, levodopa is converted into dopamine, which may act as an emetic – an effect Dr. Guggenheim didn’t understand. He simply abandoned his human study, erroneously concluding, on the basis of his animal research, that levodopa is “pharmacologically fairly indifferent.”
Around the same time, several scientists across Europe successfully synthesized dopamine, but those discoveries were shelved without much fanfare. For the next 3 decades, dopamine and levodopa were pushed into academic obscurity. Just before World War II, a group of German scientists showed that levodopa is metabolized to dopamine in the body, while another German researcher, Hermann Blaschko, MD, discovered that dopamine is an intermediary in the synthesis of noradrenaline. Even these findings, however, were not immediately accepted.
The dopamine story picked up pace in the post-war years with the observation that the hormone was present in various tissues and body fluids, although nowhere as abundantly as in the central nervous system. Intrigued, Dr. Blaschko (who, after escaping Nazi Germany, had changed his name to Hugh and started work at Oxford University in England) hypothesized that dopamine couldn’t be an unremarkable precursor of noradrenaline – it had to have some physiologic functions of its own. He asked his postdoctoral fellow, Oleh Hornykiewicz, MD, to test a few ideas. Dr. Hornykiewicz soon confirmed that dopamine lowered blood pressure in guinea pigs, proving that dopamine indeed had physiologic activity independent of other catecholamines.
Reserpine and rabbit ears
While Dr. Blaschko and Dr. Hornykiewicz were puzzling over dopamine’s physiologic role in the body, across the ocean at the National Heart Institute in Maryland, pharmacologist Bernard Brodie, PhD, and colleagues were laying the groundwork for the discovery of dopamine’s starring role in the brain.
Spoiler alert: Dr. Brodie’s work showed that a new psychiatric drug known as reserpine was capable of fully depleting the brain’s stores of serotonin and – of greatest significance, as it turned out – mimicking the neuromuscular symptoms typical of Parkinson’s disease. The connection to dopamine would be made by new lab colleague Arvid Carlsson, MD, PhD, who would go on to win a Nobel Prize.
Derived from Rauwolfia serpentina (a plant that for centuries has been used in India for the treatment of mental illness, insomnia, and snake bites), reserpine was introduced in the West as a treatment for schizophrenia.
It worked marvels. In 1954, the press lauded the “dramatic” and seemingly “incredible” results in treating “hopelessly insane patients.” Reserpine had a downside, however. Reports soon changed in tone regarding the drug’s severe side effects, including headaches, dizziness, vomiting, and, far more disturbingly, symptoms mimicking Parkinson’s disease, from muscular rigidity to tremors.
Dr. Brodie observed that, when rabbits were injected with reserpine, the animals became completely immobile. Serotonin nearly vanished from their brains, but bizarrely, drugs that spur serotonin production did not reverse the immobility.
Dr. Carlsson realized that other catecholamines must be involved in reserpine’s side effects, and he began to search for the culprits. He moved back to his native Sweden and ordered a spectrophotofluorimeter. In one of his experiments, Carlsson injected a pair of rabbits with reserpine, which caused the animals to become catatonic with flattened ears. After the researchers injected the animals with levodopa, within 15 minutes, the rabbits were hopping around, ears proudly vertical. “We were just as excited as the rabbits,” Dr. Carlsson later recalled in a 2016 interview. Dr. Carlsson realized that, because there was no noradrenaline in the rabbits’ brains, dopamine depletion must have been directly responsible for producing reserpine’s motor inhibitory effects.
Skeptics are silenced
In 1960, however, the medical community was not yet ready to accept that dopamine was anything but a boring intermediate between levodopa and noradrenaline. At a prestigious London symposium, Dr. Carlsson and his two colleagues presented their hypothesis that dopamine may be a neurotransmitter, thus implicating it in Parkinson’s disease. They were met with harsh criticism. Some of the experts said levodopa was nothing more than a poison. Dr. Carlsson later recalled facing “a profound and nearly unanimous skepticism regarding our points of view.”
That would soon change. Dr. Hornykiewicz, the biochemist who had earlier discovered dopamine’s BP-lowering effects, tested Dr. Carlsson’s ideas using the postmortem brains of Parkinson’s disease patients. It appeared Dr. Carlsson was right: Unlike in healthy brains, the striatum of patients with Parkinson’s disease contained almost no dopamine whatsoever. Beginning in 1961, in collaboration with neurologist Walther Birkmayer, MD, Hornykiewicz injected levodopa into 20 patients with Parkinson’s disease and observed a “miraculous” (albeit temporary) amelioration of rigidity, motionlessness, and speechlessness.
By the late 1960s, levodopa and dopamine were making headlines. A 1969 New York Times article described similar stunning improvements in patients with Parkinson’s disease who were treated with levodopa. A patient who had arrived at a hospital unable to speak, with hands clenched and rigid expression, was suddenly able to stride into his doctor’s office and even jog around. “I might say I’m a human being,” he told reporters. Although the treatment was expensive – equivalent to $210 in 2022 – physicians were deluged with requests for “dopa.” To this day, levodopa remains a gold standard in the treatment of Parkinson’s disease.
Still misunderstood
The history of dopamine, however, is not only about Parkinson’s disease but extends to the treatment of schizophrenia and addiction. When, in the 1940s, a French military surgeon started giving a new antihistamine drug, promethazine, to prevent shock in soldiers undergoing surgery, he noticed a bizarre side effect: the soldiers would become euphoric yet oddly calm at the same time.
After the drug was modified by adding a chlorine atom and renamed chlorpromazine, it fast became a go-to treatment for psychosis. At the time, no one made the connection to dopamine. Contemporary doctors believed that it calmed people by lowering body temperature (common treatments for mental illness back in the day included swaddling patients in cold, wet sheets). Yet just like reserpine, chlorpromazine produced a range of nasty side effects that closely mimicked Parkinson’s disease. This led a Dutch pharmacologist, Jacques van Rossum, to hypothesize that dopamine receptor blockade could explain chlorpromazine’s antipsychotic effects – an idea that remains widely accepted today.
In the 1970s, dopamine was linked with addiction through research on rodents, and this novel idea caught people’s imagination over the coming decades. A story on dopamine titled “How We Get Addicted” made the cover of Time in 1997.
Yet as the dopamine/addiction connection became widespread, it also became oversimplified. According to a 2015 article in Nature Reviews Neuroscience, a wave of low-quality research followed – nonreplicated and insufficient – which led the authors to conclude that we are “addicted to the dopamine theory of addiction.” Just about every pleasure under the sun was being attributed to dopamine, from eating delicious foods and playing computer games to sex, music, and hot showers. As recent science shows, however, dopamine is not simply about pleasure – it’s about reward prediction, response to stress, memory, learning, and even the functioning of the immune system. Since its first synthesis in the early 20th century, dopamine has often been misunderstood and oversimplified – and it seems the story is repeating itself now.
In one of his final interviews, Dr. Carlsson, who passed away in 2018 at the age of 95, warned about playing around with dopamine and, in particular, prescribing drugs that have an inhibitory action on this neurotransmitter. “Dopamine is involved in everything that happens in our brains – all its important functions,” he said.
We should be careful how we handle such a delicate and still little-known system.
A version of this article first appeared on Medscape.com.
Limiting antibiotic overprescription in pandemics: New guidelines
A statement by the Society for Healthcare Epidemiology of America, published online in Infection Control & Hospital Epidemiology, offers health care providers guidelines on how to prevent inappropriate antibiotic use in future pandemics and to avoid some of the negative scenarios that have been seen with COVID-19.
According to the U.S. Centers for Disease Control and Prevention, rates of drug-resistant infections in U.S. hospitals rose during the first year of the COVID-19 pandemic.
The culprit might be the widespread antibiotic overprescription during the current pandemic. A 2022 meta-analysis revealed that in high-income countries, 58% of patients with COVID-19 were given antibiotics, whereas in lower- and middle-income countries, 89% of patients were put on such drugs. Some hospitals in Europe and the United States reported similarly elevated numbers, sometimes approaching 100%.
“We’ve lost control,” Natasha Pettit, PharmD, pharmacy director at University of Chicago Medicine, told this news organization. Dr. Pettit was not involved in the SHEA study. “Even if CDC didn’t come out with that data, I can tell you right now more of my time is spent trying to figure out how to manage these multi-drug–resistant infections, and we are running out of options for these patients.”
“Dealing with uncertainty, exhaustion, [and] critical illness in often young, otherwise healthy patients meant doctors wanted to do something for their patients,” said Tamar Barlam, MD, an infectious diseases expert at the Boston Medical Center who led the development of the SHEA white paper, in an interview.
That something often was a prescription for antibiotics, even without a clear indication that they were actually needed. A British study revealed that in times of pandemic uncertainty, clinicians often reached for antibiotics “just in case” and referred to conservative prescribing as “bravery.”
Studies have shown, however, that bacterial co-infections in COVID-19 are rare. A 2020 meta-analysis of 24 studies concluded that only 3.5% of patients had a bacterial co-infection on presentation, and 14.3% had a secondary infection. Similar patterns had previously been observed in other viral outbreaks. Research on MERS-CoV, for example, documented only 1% of patients with a bacterial co-infection on admission. During the 2009 H1N1 influenza pandemic, that number was 12% of non–ICU hospitalized patients.
Yet, according to Dr. Pettit, even when such data became available, it didn’t necessarily change prescribing patterns. “Information was coming at us so quickly, I think the providers didn’t have a moment to see the data, to understand what it meant for their prescribing. Having external guidance earlier on would have been hugely helpful,” she told this news organization.
That’s where the newly published SHEA statement comes in: It outlines recommendations on when to prescribe antibiotics during a respiratory viral pandemic, what tests to order, and when to de-escalate or discontinue the treatment. These recommendations include, for instance, advice to not trust inflammatory markers as reliable indicators of bacterial or fungal infection and to not use procalcitonin routinely to aid in the decision to initiate antibiotics.
According to Dr. Barlam, one of the crucial lessons here is that if clinicians see patients with symptoms that are consistent with the current pandemic, they should trust their own impressions and avoid reaching for antimicrobials “just in case.”
Another important lesson is that antibiotic stewardship programs have a huge role to play during pandemics. They should not only monitor prescribing but also compile new information on bacterial co-infections as it gets released and make sure it reaches the clinicians in a clear form.
Evidence suggests that such programs and guidelines do work to limit unnecessary antibiotic use. In one medical center in Chicago, for example, before recommendations on when to initiate and discontinue antimicrobials were released, over 74% of COVID-19 patients received antibiotics. After guidelines were put in place, the use of such drugs fell to 42%.
Dr. Pettit believes, however, that it’s important not to leave each medical center to its own devices. “Hindsight is always twenty-twenty,” she said, “but I think it would be great that, if we start hearing about a pathogen that might lead to another pandemic, we should have a mechanism in place to call together an expert body to get guidance for how antimicrobial stewardship programs should get involved.”
One of the authors of the SHEA statement, Susan Seo, reports an investigator-initiated Merck grant on cost-effectiveness of letermovir in hematopoietic stem cell transplant patients. Another author, Graeme Forrest, reports a clinical study grant from Regeneron for inpatient monoclonals against SARS-CoV-2. All other authors report no conflicts of interest. The study was independently supported.
A version of this article first appeared on Medscape.com.
FROM INFECTION CONTROL & HOSPITAL EPIDEMIOLOGY
Ancient human teeth revise the history of microbial evolution
The cupboard in Dr. Nicolás Rascovan’s microbial paleogenomics lab at Institut Pasteur in Paris is filled with cardboard boxes that look as if they were shipped from an office supply store. Yet, instead of pencils and Post-it notes, the boxes are filled with ancient human remains from South America – several-thousand-year-old vertebrae, petrous bones (which protect inner ear structures), and teeth – all neatly packed in plastic bags.
Over the past decade, technologic advances in DNA recovery and sequencing have made it possible for scientists such as Dr. Rascovan, an Argentinian molecular biologist, to analyze ancient specimens relatively quickly and affordably. They’ve been hunting for – and finding – DNA of centuries-old microbes in various archeological samples: from smallpox variola virus and Mycobacterium tuberculosis in mummified tissues, to the Black Death bacterium, Yersinia pestis, in neolithic teeth, to Plasmodium falciparum preserved in historical blood stains.
It could even, perhaps, rewrite history. “It’s a story of a continent in a closet,” Dr. Rascovan says.
The ultramodern Parisian offices of the microbial paleogenomics group, a team of five scientists led by Dr. Rascovan, clash with the logo they half-jokingly chose for themselves and plastered all over the lab’s walls: a Jurassic Park–inspired dinosaur baring its giant, ancient teeth, made to look like an image seen under a microscope. Ancient teeth are certainly central to the group’s work, because that is where ancient pathogens’ DNA is most likely to be preserved – after death, teeth act like tiny, sealed-up boxes for microbes. “If you have a pathogen that is circulating in the blood, it will sometimes get into the teeth, and when you die, the DNA will stay there,” Dr. Rascovan says.
To process ancient teeth, Dr. Rascovan enters a lab clad head to toe in protective gear. That’s not so much to save himself from potentially deadly disease as to save the samples from contamination, he says. According to Sebastian Duchene Garzon, a microbiologist at the University of Melbourne, “the likelihood of ancient pathogen DNA leading to infections at present is remote, although certainly not impossible, because of how degraded the DNA usually is and because it would still need all the molecular machinery to infect a modern host.”
The processing itself starts with a thorough cleaning that involves bleach to remove any modern DNA contamination. Next, Dr. Rascovan cuts the tooth with a Dremel rotary tool to open it up and reach the pulp – which is not only very durable but also naturally sterile, a perfect place to find ancient pathogens. He then scrapes the pulp into a powder that can be poured into a tube for DNA extraction.
So far, though, Dr. Rascovan’s biggest breakthrough didn’t come from the teeth he cut up himself. It came from analyzing publicly available DNA data from studies of ancient human genomes. When such genomes are sequenced from fossil teeth or bones, scientists pick out the material they need to study our ancestors’ evolutionary history. But scraps of microbial DNA often hide among the double helixes coding hominid genetic instructions – scraps that in the past were frequently simply discarded.
Dr. Rascovan downloaded data from published articles on ancient human DNA that had been found in teeth and reanalyzed them, searching for bacteria. One night, when he was alone in his office going through lines and lines of data, he spotted it: DNA of the plague-causing bacterium, Y. pestis. When Dr. Rascovan cross-checked to determine in which samples the bacterium’s DNA was found, his heart raced. “It was not supposed to be there,” he says. He had just discovered the most ancient case of plague in humans – which occurred 4,900 years ago in Sweden.
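The core idea of such a reanalysis – scanning sequencing reads for fragments that match a pathogen rather than the human host – can be sketched in a few lines. The toy below is purely illustrative and is not Dr. Rascovan’s actual pipeline (real studies use full genomes, dedicated classifiers such as Kraken or BLAST, and checks for ancient-DNA damage patterns); the sequences and names here are made up.

```python
# Toy sketch: flag sequencing reads that share a k-mer (a short exact
# subsequence) with a pathogen reference. Real pipelines work the same
# way in spirit, but at genome scale with probabilistic matching.

def kmers(seq, k):
    """All overlapping substrings of length k in seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen_reads(reads, reference, k=8):
    """Return the reads sharing at least one k-mer with the reference."""
    ref_kmers = kmers(reference, k)
    return [r for r in reads if kmers(r, k) & ref_kmers]

# Hypothetical pathogen reference fragment and two sequencing reads
reference = "ATGGCTAGCTAGGACTTACGGATCCGATCGA"
reads = [
    "CCCTTTAAAGGGCCCTTTAAA",  # unrelated (e.g., host DNA)
    "TAGGACTTACGGATCC",       # overlaps the pathogen reference
]
hits = screen_reads(reads, reference)
print(len(hits))  # 1
```

In practice a hit like this is only a starting point: it must be confirmed by coverage across the pathogen genome and by the chemical damage signature that distinguishes truly ancient DNA from modern contamination.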
Scientists used to believe that plague pandemics came to Europe from the Eurasian Steppe. Yet here was the DNA of Y. pestis lodged in the teeth of two farmers, a woman and a man, who died in Scandinavia before the plague’s supposed arrival from the East. Their bodies were buried in an unusually large common grave – of itself a possible indication of an epidemic.
When Dr. Rascovan and his colleagues applied molecular-clock analyses of the phylogenetic tree of the plague bacteria and compared various strains to see which one was the most ancestral, they confirmed that the Swedish strain of Y. pestis, named Gok2, was indeed the oldest – the origin of the Steppe strains rather than their distant cousin. Plague, it seemed, wasn’t brought to Europe during mass migrations from the East. Instead, it might have originated there.
Such work is not simply about rewriting history. By updating our knowledge of ancient pandemics, we can learn how different factors influence each other in fostering outbreaks. For Dr. Rascovan, the Swedish plague story underscores the importance of our lifestyle and environment for the emergence and spread of dangerous pathogens. The Gok2 strain lacked ymt, a gene that makes plague particularly virulent, yet the strain might have played an important role in Bronze Age Europe. At that time, mega-settlements of 10,000 to 20,000 people existed in what is now Ukraine, Romania, and Moldova, yet those settlements were frequently burned to the ground and abandoned. According to Dr. Rascovan and his colleagues, that could fit with the plague pandemic story (although this remains very much a hypothesis).
In Mexico, environmental factors might have played an important role in the severity of the 16th century “cocoliztli” epidemic (the word means “pestilence” in a local language), considered one of the most devastating epidemics in New World history. The disease, which caused vomiting, red spots on the skin, and bleeding from various body orifices, had no known cause. Some hypothesized the bug might have been smallpox, judging by the severity of the outbreak. A 2018 study of a victim’s DNA showed it contained the genome of Salmonella enterica, a bacterium that causes enteric fever – a microbe generally milder than smallpox. The study’s authors argued that specific conditions may have been necessary at the onset of the epidemic for the S. enterica microbe to cause such devastating outcomes. A mix of severe drought, forced relocations of the local population by their Spanish rulers, and new subsistence farming practices all negatively affected hygienic conditions in the local settlements. According to Dr. Rascovan, such research can “place pandemics into their broader context” – with potential lessons for the future.
One of the microbes Dr. Rascovan and his team are hoping to find in the ancient teeth stocked in their lab’s closet is tuberculosis – a pathogen that kills 1.5 million people a year, yet whose evolutionary history remains largely a mystery. The focus of Dr. Rascovan and his colleagues remains on fossils shipped from South America, since we still know very little about microbes that were associated with pre-Columbian populations. South Americans have been isolated from the rest of the world for 20,000 years, making them particularly interesting candidates for the study of emergence, evolution, and spread of pathogens.
Dr. Rascovan believes that ancient microbial genomic data can help scientists better understand antibiotic resistance through comparisons of bacterial evolution before and after the discovery of antibiotics. In general, he says, by studying only current pathogens and the modern outbreaks they cause, we see only a narrow sample of something that is much more diverse and much larger. “We are missing an important part of information. Ancient samples can bring us a perspective,” he says.
A version of this article first appeared on Medscape.com.
Dracunculiasis – guinea worm disease – is close to eradication. But will we ever reach the finish line?
When, in 1988, former U.S. President Jimmy Carter toured Denchira and Elevanyo, two villages near Accra, Ghana, he noticed a young woman who appeared to be cradling a baby. Mr. Carter approached her for a chat but was stopped in his tracks by a disquieting sight.
“It was not a baby. It was her right breast, which was about a foot long, and it had a guinea worm coming out of its nipple,” Mr. Carter later recalled. During his tour of Ghana that year, Mr. Carter saw hundreds of people affected by the guinea worm, an infection known as dracunculiasis – a disease caused by the nematode parasite Dracunculus medinensis. It’s a condition that can cause fever, severe pain, and even permanent damage to affected limbs.
In the late 1980s, Ghana reported as many as 180,000 cases of guinea worm disease per year. Across the globe, that number was a staggering 3.5 million. By 2020, however, the world was down to just 27 cases, all of them in Africa.
This enormous reduction in prevalence is a direct effect of campaigns by endemic countries assisted by organizations such as the Centers for Disease Control and Prevention, the World Health Organization, and the Carter Center (a not-for-profit founded in 1982 by Jimmy Carter), which have strived since the 1980s to eradicate dracunculiasis, hoping to make it the second human disease purposefully wiped off the face of the Earth. (Smallpox was the first.)
“That’s an extraordinary public health achievement,” David Molyneux, PhD, parasitologist at the Liverpool School of Tropical Medicine, said in an interview. Yet the eradication goal, currently set for 2030, seems unlikely to be met. What’s more, some experts argue that chasing eradication may be altogether a misguided idea.
Humanity has known dracunculiasis for millennia. Well-preserved specimens of Dracunculus medinensis were discovered in Egyptian mummies, while some researchers claim that the Old Testament’s “fiery serpents” that descended upon the Israelites near the Red Sea were in fact guinea worms, as the parasite was endemic to the area in the past. Even the serpent coiled around the staff of Asclepius, the god of medicine, might have been a guinea worm, according to some historians.
This would make sense considering how the disease is treated. When an adult worm emerges through the skin, a painful and crippling occurrence, it is wound up around a stick or a piece of gauze, a little at a time, to slowly draw it out of the skin. As the worm can be over 3 feet long, this procedure may take weeks. What you end up with is a stick with a long, snake-like animal coiled around it. Asclepius’s staff.
The first step in the infection is when a person drinks water contaminated with copepods, or water fleas, which contain the larvae of Dracunculus medinensis. Next, the larvae are freed in the stomach and start migrating through the body, looking to mate. The fertilized female worm is the one that causes the debilitating symptoms.
About a year after the initial infection, the pregnant female worm looks for exit points from the body, usually through legs or feet, ready to release new larvae. If the unlucky sufferer steps into a pond or a river, the immature larvae escape into the water, where they are eaten by water fleas. “People are fetching water to drink, and they walk into the water thinking they can get cleaner water not along the edge,” Adam Weiss, MPH, director of the Carter Center’s Guinea Worm Eradication Program, said in an interview. The vicious cycle begins anew.
Dracunculiasis may not be a killer disease, but it is painful and disabling. A study on school attendance in Nigeria showed that in 1995, when guinea worm infection prevalence among schoolchildren was as high as 27.7%, the disease was responsible for almost all school absences. As a result of the infection, children were seen wandering and sitting around the village helplessly. If it was the parents who got infected, children stayed out of school to help around the home. Dracunculiasis’ impact on work and earning capacity is so profound, in fact, that in Mali the affliction is known as “the disease of the empty granary.”
When in 1986 the Carter Center took the reins of the global dracunculiasis eradication campaign, India was the only country with a national program to get rid of the disease. Yet, once other nations joined the struggle, the results rapidly became visible. By 1993, the American Journal of Tropical Medicine and Hygiene published a paper titled, “Dracunculiasis Eradication: Beginning of the End.” The cases plummeted from 3.5 million in 1986 to 221,000 in 1993 and 32,000 in 2003, then to a mere 22 cases in 2015. What worked was a combination of surveillance, education campaigns, safe water provision, and treating potentially contaminated water with a chemical called Abate, a potent larvicide.
Today, many endemic countries, from Chad and Ethiopia to Mali and South Sudan, follow similar procedures. First and foremost is the supply of clean drinking water. However, Mr. Weiss said, this is not a “silver bullet, given how people live.” Those who are seminomadic or otherwise take care of livestock often fetch water outside of the village, from ponds or rivers. This is why dracunculiasis eradication programs include handing out portable water filters, which can be worn around the neck.
But if you don’t know why you should filter water, in all likelihood you won’t do it – cloth filters distributed for home water purification sometimes ended up as decorations or sewn into wedding dresses. That’s why education is key, too. Poster campaigns, comic books, radio broadcasts, instructions by volunteers, even t-shirts with health messages slowly but surely did change behaviors.
Cash rewards for reporting cases of dracunculiasis, which can be as high as $100, also work well to boost surveillance systems. Once a case is identified, patients may be moved to a containment center, both to treat the wound and to prevent patients from spreading the disease. Local water sources, meanwhile, may be sprayed with Abate.
The first target date set for the eradication of dracunculiasis was 1995. Yet the goal wasn’t met – even though the total number of cases did decline by 97%. New targets followed: 2009, 2020, and now, finally, 2030. For well over a decade the world has been down to a trickle of cases per year, but the numbers don’t budge any lower. Mr. Weiss calls it a “limbo period” – we are almost there, but not quite. The final push, it seems, may be the most difficult, especially now that there are two further complications: increasing conflicts in some endemic areas and zoonotic transmission.
According to WHO, in places like the Democratic Republic of the Congo, Mali, South Sudan, and Sudan, insecurity “hinders eradication efforts.” Not only does this insecurity make it difficult for health workers to reach endemic areas, but wars and violence also displace people, pushing those infected with guinea worm to walk far distances in search of safety, and spreading the disease during their travels. Case containment and contact tracing become challenging. A recent study by Dr. Molyneux and colleagues showed that, in the 3 years since 2018, conflicts in the endemic areas have increased dramatically.
And then there are the animals. Up until 2012, eradication of guinea worm seemed fairly simple, at least from a biological perspective: Stop infected humans from contaminating drinking water and the parasites won’t be able to continue their life cycle. But in 2012, news came from Chad that a significant number of local dogs were found infected with the Dracunculus medinensis parasite, the very same one that attacks humans. In 2020, close to 1,600 dogs were reported to be infected with guinea worm, most of them in Chad. This left scientists scratching their heads: Dracunculiasis was supposed to be a purely human affliction. How were the dogs getting infected? Did the parasite jump to a new species because we were so efficient at eliminating it from humans?
“I have first seen a guinea worm transmission in dogs back in 2003,” Teshome Gebre, PhD, said in an interview. Dr. Gebre is regional director for Africa at International Trachoma Initiative and has spent more than 40 years fighting to eradicate various diseases, including smallpox and guinea worm. Yet in 2003, Dr. Gebre’s report was dismissed: it couldn’t have been the same species of the parasite, the reasoning went, since Dracunculus medinensis was exclusive to humans.
“I think it’s fair to say that there were infections in dogs before 2012. I find it difficult to believe, logically, that it just came out of nowhere,” Mr. Weiss said. A 2018 genetic study showed that a novel host switch is an unlikely scenario – the parasites must have been infecting dogs in the past, we just haven’t been looking. By 2012, Chad had a very efficient guinea worm surveillance system, with generous cash rewards for human cases, and people started reporting the dogs, too. Soon money was also offered for news on infected animals, and the cases exploded. This was then followed by accounts of afflicted cats and baboons.
For dracunculiasis to be declared eradicated in 2030, there must be no transmission of the parasite anywhere in the world for at least 4 years prior – not only zero human cases, but also no infections in dogs, cats, or baboons. Seven countries remain to be certified as guinea worm free, all of them in Africa. “We have to be a 100% sure that there is no transmission of the parasite in a country,” said Dr. Molyneux, who participated in country certification teams – a rigorous process to validate country reports. He believes that the presence of animal hosts, as well as growing insecurity in the region, will make such certification extremely challenging over the next few years.
“Eradication as it is defined does not seem feasible by 2030 as things stand, [considering] political and resource constraints, the unknowns of the ecology of dogs, and the possible impact of climate change and geopolitical instability and with countries having other health priorities, including COVID,” Dr. Molyneux said.
For Mr. Weiss, dogs are not that much of a problem – since they can be tethered to prevent the spread of the disease. But you can’t tether baboons. “That does raise that more existential threat–related question of: Is this scientifically possible?” he said. Mr. Weiss and colleagues at the Centers for Disease Control and Prevention are currently working on a serologic assay to test whether baboons are important for human transmission.
For some experts, such as Dr. Gebre, the current struggles to bring cases down to zero put a spotlight on a bigger question: is it worthwhile to strive for eradication at all? That last stretch of the eradication campaign can appear a bit like a game of whack-a-mole. “There were times when we’ve achieved zero cases [in Ethiopia]. Zero. And then, it just reemerges,” Dr. Gebre said. Programs aimed at certification are costly, running up to $1.6 million per year in Nigeria. The funds often come from the same donor pockets that pay for the fight against malaria, HIV, polio, as well as other neglected tropical diseases. Dr. Gebre believed it would be more cost and time efficient to switch the effort from total eradication to elimination as a public health care problem.
Of course, there is the risk that the cases would go up again once we ease up on the pressure to eradicate dracunculiasis. “Do we want to be fighting guinea worm in perpetuity?” Mr. Weiss asked. However, Dr. Gebre believed the cases are unlikely to explode anymore.
“The situation in the countries is not the way it was 30 years ago,” Dr. Gebre said, pointing out increased awareness, higher education levels, and better community-based health facilities. “You can cap it around a trickle number of cases a year – 10, 15, 20 maybe.”
The keys, Dr. Gebre and Dr. Molyneux both said, include the provision of safe drinking water and strengthening the healthcare systems of endemic countries in general, so they can deal with whatever cases may come up. “Water, sanitation, surveillance, good public education – and the maintenance of the guinea worm–specific reward system to maintain awareness, as well as continuing research” are all needed, Dr. Molyneux said.
Getting out of the dracunculiasis limbo period won’t be easy. We certainly need more data on animal transmission to better understand what challenges we might be facing. The experts agree that what’s important is to follow the science and stay flexible. “We have made an incredible progress, our investment has been worthwhile,” Dr. Molyneux said. But “you have to adapt to the changing realities.”
Dr. Gebre received no financial support for the review article and has no other conflicts of interest to declare. Dr. Molyneux is a member of the WHO International Commission for the Certification of Dracunculus Eradication, an independent body appointed by the director general of WHO. He acts as a rapporteur for the ICCDE as a paid consultant. He declared he does not receive any financial support for other related activities. Mr. Weiss receives support from the nonprofit Carter Center.
A version of this article first appeared on Medscape.com.
In 1988, when former U.S. President Jimmy Carter toured Denchira and Elevanyo, two villages near Accra, Ghana, he noticed a young woman who appeared to be cradling a baby. Mr. Carter approached her for a chat but was stopped in his tracks by a disquieting sight.
“It was not a baby. It was her right breast, which was about a foot long, and it had a guinea worm coming out of its nipple,” Mr. Carter later recalled. During his tour of Ghana that year, Mr. Carter saw hundreds of people affected by guinea worm infection, known as dracunculiasis – a disease caused by the nematode parasite Dracunculus medinensis that can bring fever, severe pain, and even permanent damage to affected limbs.
In the late 1980s the country reported as many as 180,000 cases of guinea worm disease per year. Across the globe, that number was a staggering 3.5 million. However, by 2020, the world was down to just 27 cases, all of them in Africa.
This enormous reduction in prevalence is a direct effect of campaigns by endemic countries assisted by organizations such as the Centers for Disease Control and Prevention, the World Health Organization, and the Carter Center (a not-for-profit founded in 1982 by Jimmy Carter), which have strived since the 1980s to eradicate dracunculiasis, hoping to make it the second human disease purposefully wiped off the face of Earth. (Smallpox was the first.)
“That’s an extraordinary public health achievement,” David Molyneux, PhD, parasitologist at the Liverpool School of Tropical Medicine, said in an interview. Yet the eradication goal, currently set for 2030, seems unlikely to be met. What’s more, some experts argue that chasing eradication may be altogether a misguided idea.
Humanity has known dracunculiasis for millennia. Well-preserved specimens of Dracunculus medinensis were discovered in Egyptian mummies, while some researchers claim that the Old Testament’s “fiery serpents” that descended upon the Israelites near the Red Sea were in fact guinea worms, as the parasite was endemic to the area in the past. Even the serpent coiled around the staff of Asclepius, the god of medicine, might have been a guinea worm, according to some historians.
This would make sense considering how the disease is treated. When an adult worm emerges through the skin, a painful and crippling occurrence, it is wound up around a stick or a piece of gauze, a little at a time, to slowly draw it out of the skin. As the worm can be over 3 feet long, this procedure may take weeks. What you end up with is a stick with a long, snake-like animal coiled around it. Asclepius’s staff.
The first step in the infection is when a person drinks water contaminated with copepods, or water fleas, which contain the larvae of Dracunculus medinensis. Next, the larvae are freed in the stomach and start migrating through the body, looking to mate. The fertilized female worm is the one that causes the debilitating symptoms.
About a year after the initial infection, the pregnant female worm looks for exit points from the body, usually through legs or feet, ready to release new larvae. If the unlucky sufferer steps into a pond or a river, the immature larvae escape into the water, where they are eaten by water fleas. “People are fetching water to drink, and they walk into the water thinking they can get cleaner water not along the edge,” Adam Weiss, MPH, director of the Carter Center’s Guinea Worm Eradication Program, said in an interview. The vicious cycle begins anew.
Dracunculiasis may not be a killer disease, but it is painful and disabling. A study on school attendance in Nigeria showed that in 1995, when guinea worm infection prevalence among schoolchildren was as high as 27.7%, it was responsible for almost all school absences. As a result of the infection, children were seen wandering and sitting around the village helplessly. If it was the parents who got infected, children stayed out of school to help around the home. The disease’s impact on work and earning capacity is so profound, in fact, that in Mali the affliction is known as “the disease of the empty granary.”
When the Carter Center took the reins of the global dracunculiasis eradication campaign in 1986, India was the only country with a national program to get rid of the disease. Yet once other nations joined the struggle, results rapidly became visible. By 1993, the American Journal of Tropical Medicine and Hygiene had published a paper titled “Dracunculiasis Eradication: Beginning of the End.” Cases plummeted from 3.5 million in 1986 to 221,000 in 1993 and 32,000 in 2003, then to a mere 22 cases in 2015. What worked was a combination of surveillance, education campaigns, safe water provision, and treating potentially contaminated water with Abate, a potent larvicide.
Today, many endemic countries, from Chad and Ethiopia to Mali and South Sudan, follow similar procedures. First and foremost is the supply of clean drinking water. However, Mr. Weiss said, this is not a “silver bullet, given how people live.” Those who are seminomadic or otherwise take care of livestock often fetch water outside of the village, from ponds or rivers. This is why dracunculiasis eradication programs include handing out portable water filters, which can be worn around the neck.
But if you don’t know why you should filter water, in all likelihood you won’t do it – cloth filters distributed for home water purification sometimes ended up as decorations or sewn into wedding dresses. That’s why education is key, too. Poster campaigns, comic books, radio broadcasts, instructions by volunteers, even t-shirts with health messages slowly but surely did change behaviors.
Cash rewards for reporting cases of dracunculiasis, which can be as high as $100, also work well to boost surveillance systems. Once a case is identified, patients may be moved to a containment center, both to treat the wound and to prevent patients from spreading the disease. Local water sources, meanwhile, may be sprayed with Abate.
The first target date set for the eradication of dracunculiasis was 1995. Yet the goal wasn’t met – even though the total number of cases had declined by 97%. New target dates followed: 2009, 2020, and now, finally, 2030. For well over a decade, the world has been down to a trickle of cases per year, but the numbers don’t seem to budge any lower. Mr. Weiss calls it a “limbo period” – we are almost there, but not quite. The final push, it seems, may be the most difficult, especially now that there are two further complications: increasing conflict in some endemic areas and zoonotic transmission.
According to WHO, in places like the Democratic Republic of the Congo, Mali, South Sudan, and Sudan, insecurity “hinders eradication efforts.” Not only does this insecurity make it difficult for health workers to reach endemic areas, but wars and violence also displace people, pushing those infected with guinea worm to walk far distances in search of safety, and spreading the disease during their travels. Case containment and contact tracing become challenging. A recent study by Dr. Molyneux and colleagues showed that, in the 3 years since 2018, conflicts in the endemic areas have increased dramatically.
And then there are the animals. Up until 2012, eradication of guinea worm seemed fairly simple, at least from a biological perspective: Stop infected humans from contaminating drinking water and the parasites won’t be able to continue their life cycle. But in 2012, news came from Chad that a significant number of local dogs had been found infected with Dracunculus medinensis, the very same parasite that attacks humans. In 2020, close to 1,600 dogs were reported infected with guinea worm, most of them in Chad. This left scientists scratching their heads: Dracunculiasis was supposed to be a purely human affliction. How were the dogs getting infected? Did the parasite jump to a new species because we were so efficient at eliminating it from humans?
“I have first seen a guinea worm transmission in dogs back in 2003,” Teshome Gebre, PhD, said in an interview. Dr. Gebre is regional director for Africa at International Trachoma Initiative and has spent more than 40 years fighting to eradicate various diseases, including smallpox and guinea worm. Yet in 2003, Dr. Gebre’s report was dismissed: it couldn’t have been the same species of the parasite, the reasoning went, since Dracunculus medinensis was exclusive to humans.
“I think it’s fair to say that there were infections in dogs before 2012. I find it difficult to believe, logically, that it just came out of nowhere,” Mr. Weiss said. A 2018 genetic study showed that a novel host switch is an unlikely scenario – the parasites must have been infecting dogs in the past; we just hadn’t been looking. By 2012, Chad had a very efficient guinea worm surveillance system, with generous cash rewards for human cases, and people started reporting the dogs, too. Soon money was also offered for news of infected animals, and reported cases exploded. This was then followed by accounts of afflicted cats and baboons.
For dracunculiasis to be declared eradicated in 2030, there must be no transmission of the parasite anywhere in the world for at least 4 years prior – not only zero human cases, but also no infections in dogs, cats, or baboons. Seven countries remain to be certified as guinea worm free, all of them in Africa. “We have to be a 100% sure that there is no transmission of the parasite in a country,” said Dr. Molyneux, who has participated in country certification teams – a rigorous process to validate country reports. He believes that the presence of animal hosts, as well as growing insecurity in the region, makes such certification extremely challenging over the next few years.
“Eradication as it is defined does not seem feasible by 2030 as things stand, [considering] political and resource constraints, the unknowns of the ecology of dogs, and the possible impact of climate change and geopolitical instability and with countries having other health priorities, including COVID,” Dr. Molyneux said.
For Mr. Weiss, dogs are not that much of a problem – since they can be tethered to prevent the spread of the disease. But you can’t tether baboons. “That does raise that more existential threat–related question of: Is this scientifically possible?” he said. Mr. Weiss and colleagues at the Centers for Disease Control and Prevention are currently working on a serologic assay to test whether baboons are important for human transmission.
For some experts, such as Dr. Gebre, the current struggle to bring cases down to zero puts a spotlight on a bigger question: Is it worthwhile to strive for eradication at all? The last stretch of the eradication campaign can feel a bit like a game of whack-a-mole. “There were times when we’ve achieved zero cases [in Ethiopia]. Zero. And then, it just reemerges,” Dr. Gebre said. Programs aimed at certification are costly, running up to $1.6 million per year in Nigeria. The funds often come from the same donor pockets that pay for the fight against malaria, HIV, polio, and other neglected tropical diseases. Dr. Gebre believes it would be more cost- and time-efficient to switch the effort from total eradication to elimination as a public health problem.
Of course, there is a risk that cases would go up again once the pressure to eradicate dracunculiasis eases. “Do we want to be fighting guinea worm in perpetuity?” Mr. Weiss asked. However, Dr. Gebre believes cases are unlikely to explode again.
“The situation in the countries is not the way it was 30 years ago,” Dr. Gebre said, pointing out increased awareness, higher education levels, and better community-based health facilities. “You can cap it around a trickle number of cases a year – 10, 15, 20 maybe.”
The keys, Dr. Gebre and Dr. Molyneux both said, include the provision of safe drinking water and strengthening the healthcare systems of endemic countries in general, so they can deal with whatever cases may come up. “Water, sanitation, surveillance, good public education – and the maintenance of the guinea worm–specific reward system to maintain awareness, as well as continuing research” are all needed, Dr. Molyneux said.
Getting out of the dracunculiasis limbo period won’t be easy. We certainly need more data on animal transmission to better understand what challenges we might be facing. The experts agree that what’s important is to follow the science and stay flexible. “We have made an incredible progress, our investment has been worthwhile,” Dr. Molyneux said. But “you have to adapt to the changing realities.”
Dr. Gebre received no financial support for the review article and has no other conflicts of interest to declare. Dr. Molyneux is a member of the WHO International Commission for the Certification of Dracunculus Eradication, an independent body appointed by the director general of WHO. He acts as a rapporteur for the ICCDE as a paid consultant. He declared he does not receive any financial support for other related activities. Mr. Weiss receives support from the nonprofit Carter Center.
A version of this article first appeared on Medscape.com.
The role of probiotics in mental health
In 1950, at Staten Island’s Sea View Hospital, a group of patients with terminal tuberculosis were given a new antibiotic called isoniazid, which caused some unexpected side effects. The patients reported euphoria, mental stimulation, and improved sleep, and even began socializing with more vigor. The press was all over the case, writing about the sick “dancing in the halls tho’ they had holes in their lungs.” Soon doctors started prescribing isoniazid as the first-ever antidepressant.
The Sea View Hospital experiment was an early hint that changing the composition of the gut microbiome – in this case, via antibiotics – might affect our mental health. Yet only in the last 2 decades has research into connections between what we ingest and psychiatric disorders really taken off. In 2004, a landmark study showed that germ-free mice (born in such sterile conditions that they lacked a microbiome) had an exaggerated stress response. The effects were reversed, however, if the mice were fed a bacterial strain, Bifidobacterium infantis, a probiotic. This sparked academic interest, and thousands of research papers followed.
According to Stephen Ilardi, PhD, a clinical psychologist at the University of Kansas, Lawrence, who focuses on the etiology and treatment of depression, now is a “time of exciting discovery” in the field of probiotics and psychiatric disorders, although, admittedly, a lot still remains unknown.
Gut microbiome profiles in mental health disorders
We humans have about 100 trillion microbes residing in our guts. Some of these are archaea, some fungi, some protozoans and even viruses, but most are bacteria. Diet, sleep, and stress can all affect the composition of the gut microbiome. When the microbiome differs considerably from the typical composition, doctors and researchers describe it as dysbiosis, or imbalance. Studies have uncovered dysbiosis in patients with depression, anxiety, schizophrenia, and bipolar disorder.
“I think there is now pretty good evidence that the gut microbiome is actually an important factor in a number of psychiatric disorders,” says Allan Young, MBChB, clinical psychiatrist at King’s College London. The gut microbiome composition does seem to differ between psychiatric patients and healthy individuals. In depression, for example, a recent review of nine studies found an increase on the genus level in Streptococcus and Oscillibacter and low abundance of Lactobacillus and Coprococcus, among others. In generalized anxiety disorder, meanwhile, there appears to be an increase in Fusobacteria and Escherichia/Shigella.
For Dr. Ilardi, the next important question is whether there are plausible mechanisms that could explain how gut microbiota may influence brain function. And, it appears there are.
“The microbes in the gut can release neurotransmitters into blood that cross into the brain and influence brain function. They can release hormones into the blood that again cross into the brain. They’ve got a lot of tricks up their sleeve,” he says.
One particularly important pathway runs through the vagus nerve – the longest nerve that emerges directly from the brain, connecting it to the gut. Another is the immune pathway. Gut bacteria can interact with immune cells and reduce cytokine production, which in turn can reduce systemic inflammation. Inflammatory processes have been implicated in both depression and bipolar disorder. What’s more, gut microbes can upregulate the expression of a protein called BDNF – brain-derived neurotrophic factor – which helps the development and survival of nerve cells in the brain.
Probiotics’ promise varies for different conditions
As the pathways by which gut dysbiosis may influence psychiatric disorders become clearer, the next logical step is to try to influence the composition of the microbiome to prevent and treat depression, anxiety, or schizophrenia. That’s where probiotics come in.
The evidence for the effects of probiotics – live microorganisms that, when ingested in adequate amounts, confer a health benefit – is so far strongest for depression, says Viktoriya Nikolova, MRes, MSc, a PhD student and researcher at King’s College London. In their 2021 meta-analysis of seven trials, Ms. Nikolova and colleagues found that probiotics can significantly reduce depressive symptoms after just 8 weeks. There was a caveat, however – the probiotics only worked when used in addition to an approved antidepressant. Another meta-analysis, published in 2018, also showed that probiotics, when compared with placebo, improve mood in people with depressive symptoms (here, no antidepressant treatment was necessary).
Roumen Milev, MD, PhD, a neuroscientist at Queen’s University, Kingston, Ont., and coauthor of a review on probiotics and depression published in the Annals of General Psychiatry, warns, however, that the research is still in its infancy.
When it comes to using probiotics to relieve anxiety, “the evidence in the animal literature is really compelling,” says Dr. Ilardi. Human studies are less convincing, however, as Dr. Ilardi showed in his 2018 review and meta-analysis involving 743 animals and 1,527 humans. “Studies are small for the most part, and some of them aren’t terribly well conducted, and they often use very low doses of probiotics,” he says. One of the larger double-blind, placebo-controlled trials showed that supplementation with Lactobacillus plantarum helps reduce stress and anxiety, while levels of proinflammatory cytokines go down. Another meta-analysis, published in June, found that, when it comes to reducing stress and anxiety in youth, the results are mixed.
Evidence of probiotics’ efficacy in schizophrenia is emerging, yet also limited. A 2019 review concluded that currently available results only “hint” at a possibility that probiotics could make a difference in schizophrenia. Similarly, a 2020 review concluded that the role of probiotics in bipolar disorder “remains unclear and underexplored.”
Better studies, remaining questions
Apart from small samples, one issue with probiotic research is that studies tend to use varied doses of different strains of bacteria, or even multistrain mixtures, making it tough to compare results. Although there are hundreds of species of bacteria in the human gut, only a few have been evaluated for their antidepressant or antianxiety effects.
“To make it even worse, it’s almost certainly the case that depending on a person’s actual genetics or maybe their epigenetics, a strain that is helpful for one person may not be helpful for another. There is almost certainly no one-size-fits-all probiotic formulation,” says Dr. Ilardi.
Another critical question that remains to be answered is that of potential side effects.
“Probiotics are often seen as food supplements, so they don’t fall under the same regulations as drugs would,” says Ms. Nikolova. “They don’t necessarily have to follow the pattern of drug trials in many countries, which means that the monitoring of side effects is not the requirement.”
That’s something that worries King’s College psychiatrist Young too. “If you are giving it to modulate how the brain works, you could potentially induce psychiatric symptoms or a psychiatric disorder. There could be allergic reactions. There could be lots of different things,” he says.
When you search the web for “probiotics,” chances are you will come across sites boasting amazing effects that such products can have on cardiovascular health, the immune system, and yes, mental well-being. Many also sell various probiotic supplements “formulated” for your gut health or improved mood. However, many such commercially available strains have never actually been tested in clinical trials. What’s more, according to Kathrin Cohen Kadosh, PhD, a neuroscientist at the University of Surrey (England), “it is not always clear whether the different strains actually reach the gut intact.”
For now, considering the limited research evidence, a safer bet is to try to improve gut health by eating fermented foods that naturally contain probiotics, such as miso, kefir, or sauerkraut. Alternatively, you could reach for prebiotics, such as fiber-containing foods (prebiotics enhance the growth of beneficial gut microbes). This, Dr. Cohen Kadosh says, could be “a gentler way of improving gut health” than popping a pill. Whether improved mental well-being follows remains to be seen.
A version of this article first appeared on Medscape.com.