New science reveals the best way to take a pill
I want to tell you a story about forgetfulness and haste, and how the combination of the two can lead to frightening consequences. A few years ago, I was lying in bed about to turn out the light when I realized I’d forgotten to take “my pill.”
Like some 161 million other American adults, I was then a consumer of a prescription medication. Being conscientious, I got up, retrieved said pill, and tossed it back. Being lazy, I didn’t bother to grab a glass of water to help the thing go down. Instead, I promptly returned to bed, threw a pillow over my head, and prepared for sleep.
Within seconds, I began to feel a burning sensation in my chest. After about a minute, that burn became a crippling pain. Not wanting to alarm my wife, I went into the living room, where I spent the next 30 minutes doubled over in agony. Was I having a heart attack? I phoned my sister, a hospitalist in Texas. She advised me to take myself to the ED to get checked out.
If only I’d known then about “Duke.” He could have told me how critical body posture is when people swallow pills.
Who’s Duke?
Duke is a computer representation of a 34-year-old, anatomically normal human male created by computer scientists at the IT’IS Foundation, a nonprofit group based in Switzerland that works on a variety of projects in health care technology. Using Duke, Rajat Mittal, PhD, a professor of medicine at the Johns Hopkins University, Baltimore, created a computer model called “StomachSim” to explore the process of digestion.
Their research, published in the journal Physics of Fluids, turned up several surprising findings about the dynamics of swallowing pills – the most common way medication is used worldwide.
Dr. Mittal said he chose to study the stomach because the functions of most other organ systems, from the heart to the brain, have already attracted plenty of attention from scientists.
“As I was looking to initiate research in some new directions, the implications of stomach biomechanics on important conditions such as diabetes, obesity, and gastroparesis became apparent to me,” he said. “It was clear that bioengineering research in this arena lags other more ‘sexy’ areas such as cardiovascular flows by at least 20 years, and there seemed to be a great opportunity to do impactful work.”
Your posture may help a pill work better
Several well-known things affect a pill’s ability to disperse its contents into the gut and be used by the body, such as the stomach’s contents (a heavy breakfast, a mix of liquids like juice, milk, and coffee) and the motion of the organ’s walls. But Dr. Mittal’s group learned that Duke’s posture also played a major role.
The researchers ran Duke through computer simulations in varying postures: upright, leaning right, leaning left, and leaning back, while keeping all the other parts of their analyses (like the things mentioned above) the same.
They found that posture determined as much as 83% of how quickly a pill disperses into the intestines. The most efficient position was leaning right. The least efficient was leaning left, which prevented the pill from reaching the antrum, or bottom section of the stomach, and thus kept all but traces of the dissolved drug from entering the duodenum, where the stomach joins the small intestine. (Interestingly, Jews who observe Passover are advised to recline to the left during the meal as a symbol of freedom and leisure.)
That makes sense if you think about the stomach’s shape, which looks kind of like a bean, curving from the left to the right side of the body. Because of gravity, your position will change where the pill lands.
How this could help people
Among the groups most likely to benefit from such studies, Dr. Mittal said, are the elderly – who both take a lot of pills and are more prone to trouble swallowing because of age-related changes in their esophagus – and the bedridden, who can’t easily shift their posture. The findings may also lead to improvements in the ability to treat people with gastroparesis, a condition in which the stomach loses the ability to empty properly that is a particular problem for people with diabetes.
Future studies with Duke and similar simulations will look at how the GI system digests proteins, carbohydrates, and fatty meals, Dr. Mittal said.
In the meantime, Dr. Mittal offered the following advice: “Standing or sitting upright after taking a pill is fine. If you have to take a pill lying down, stay on your back or on your right side. Avoid lying on your left side after taking a pill.”
As for what happened to me, any gastroenterologist reading this has figured out that my condition was not heart-related. Instead, I likely was having a bout of pill esophagitis, irritation that can result from medications that aggravate the mucosa of the food tube. Although painful, esophagitis isn’t life-threatening. After about an hour, the pain began to subside, and by the next morning I was fine, with only a faint ache in my chest to remind me of my earlier torment. (Researchers noted an increase in the condition early in the COVID-19 pandemic, linked to the antibiotic doxycycline.)
And, in the interest of accuracy, my pill problem began above the stomach. Nothing in the Hopkins research suggests that the alignment of the esophagus plays a role in how drugs disperse in the gut – unless, of course, it prevents those pills from reaching the stomach in the first place.
A version of this article first appeared on WebMD.com.
COVID-19 linked to increased Alzheimer’s risk
The study of more than 6 million people aged 65 years or older found a 50%-80% increased risk for AD in the year after COVID-19; the risk was especially high for women older than 85 years.
However, the investigators were quick to point out that the observational retrospective study offers no evidence that COVID-19 causes AD. There could be a viral etiology at play, or the connection could be related to inflammation in neural tissue from the SARS-CoV-2 infection. Or it could simply be that exposure to the health care system for COVID-19 increased the odds of detection of existing undiagnosed AD cases.
Whatever the case, these findings point to a potential spike in AD cases, which is a cause for concern, study investigator Pamela Davis, MD, PhD, a professor in the Center for Community Health Integration at Case Western Reserve University, Cleveland, said in an interview.
“COVID may be giving us a legacy of ongoing medical difficulties,” Dr. Davis said. “We were already concerned about having a very large care burden and cost burden from Alzheimer’s disease. If this is another burden that’s increased by COVID, this is something we’re really going to have to prepare for.”
The findings were published online in the Journal of Alzheimer’s Disease.
Increased risk
Earlier research points to a potential link between COVID-19 and increased risk for AD and Parkinson’s disease.
For the current study, researchers analyzed anonymous electronic health records of 6.2 million adults aged 65 years or older who received medical treatment between February 2020 and May 2021 and had no prior diagnosis of AD. The database includes information on almost 30% of the entire U.S. population.
Overall, there were 410,748 cases of COVID-19 during the study period.
The overall risk for new diagnosis of AD in the COVID-19 cohort was close to double that of those who did not have COVID-19 (0.68% vs. 0.35%, respectively).
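The "close to double" comparison follows directly from the two reported incidences. A quick sketch of the arithmetic (illustrative only; the study's headline estimate is the propensity-matched hazard ratio below, which this does not reproduce):

```python
# Unadjusted ("crude") risk ratio from the reported incidences of new AD
# diagnoses: 0.68% in the COVID-19 cohort vs. 0.35% in the non-COVID cohort.
risk_covid = 0.0068
risk_no_covid = 0.0035

crude_risk_ratio = risk_covid / risk_no_covid
print(f"crude risk ratio = {crude_risk_ratio:.2f}")  # ~1.94, i.e. close to double
```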
After propensity-score matching, those who have had COVID-19 had a significantly higher risk for an AD diagnosis compared with those who were not infected (hazard ratio [HR], 1.69; 95% confidence interval [CI], 1.53-1.72).
Risk for AD was elevated in all age groups, regardless of gender or ethnicity. Researchers did not collect data on COVID-19 severity, and the medical codes for long COVID were not published until after the study had ended.
Those with the highest risk were individuals older than 85 years (HR, 1.89; 95% CI, 1.73-2.07) and women (HR, 1.82; 95% CI, 1.69-1.97).
“We expected to see some impact, but I was surprised that it was as potent as it was,” Dr. Davis said.
Association, not causation
Heather Snyder, PhD, Alzheimer’s Association vice president of medical and scientific relations, who commented on the findings for this article, called the study interesting but emphasized caution in interpreting the results.
“Because this study only showed an association through medical records, we cannot know what the underlying mechanisms driving this association are without more research,” Dr. Snyder said. “If you have had COVID-19, it doesn’t mean you’re going to get dementia. But if you have had COVID-19 and are experiencing long-term symptoms including cognitive difficulties, talk to your doctor.”
Dr. Davis agreed, noting that this type of study offers information on association, but not causation. “I do think that this makes it imperative that we continue to follow the population for what’s going on in various neurodegenerative diseases,” Dr. Davis said.
The study was funded by the National Institute on Aging, the National Institute on Alcohol Abuse and Alcoholism, the Clinical and Translational Science Collaborative of Cleveland, and the National Cancer Institute. Dr. Snyder reports no relevant financial conflicts.
A version of this article first appeared on Medscape.com.
Abbreviated Delirium Screening Instruments: Plausible Tool to Improve Delirium Detection in Hospitalized Older Patients
Study 1 Overview (Oberhaus et al)
Objective: To compare the 3-Minute Diagnostic Confusion Assessment Method (3D-CAM) to the long-form Confusion Assessment Method (CAM) in detecting postoperative delirium.
Design: Prospective concurrent comparison of 3D-CAM and CAM evaluations in a cohort of postoperative geriatric patients.
Setting and participants: Eligible participants were patients aged 60 years or older undergoing major elective surgery at Barnes Jewish Hospital (St. Louis, Missouri) who were enrolled in ongoing clinical trials (PODCAST, ENGAGES, SATISFY-SOS) between 2015 and 2018. Surgeries were at least 2 hours in length and required general anesthesia, planned extubation, and a minimum 2-day hospital stay. Investigators were extensively trained in administering 3D-CAM and CAM instruments. Participants were evaluated 2 hours after the end of anesthesia care on the day of surgery, then daily until follow-up was completed per clinical trial protocol or until the participant was determined by CAM to be nondelirious for 3 consecutive days. For each evaluation, both 3D-CAM and CAM assessors approached the participant together, but the evaluation was conducted such that the 3D-CAM assessor was masked to the additional questions ascertained by the long-form CAM assessment. The 3D-CAM or CAM assessor independently scored their respective assessments blinded to the results of the other assessor.
Main outcome measures: Participants were concurrently evaluated for postoperative delirium by both 3D-CAM and long-form CAM assessments. Comparisons between 3D-CAM and CAM scores were made using Cohen κ with repeated measures, generalized linear mixed-effects model, and Bland-Altman analysis.
Main results: Sixteen raters performed 471 concurrent 3D-CAM and CAM assessments in 299 participants (mean [SD] age, 69 [6.5] years). Of these participants, 152 (50.8%) were men, 263 (88.0%) were White, and 211 (70.6%) underwent noncardiac surgery. Both instruments showed good intraclass correlation (0.98 for 3D-CAM, 0.84 for CAM) with good overall agreement (Cohen κ = 0.71; 95% CI, 0.58-0.83). The mixed-effects model indicated a significant disagreement between the 3D-CAM and CAM assessments (estimated difference in fixed effect, –0.68; 95% CI, –1.32 to –0.05; P = .04). The Bland-Altman analysis showed that the probability of a delirium diagnosis with the 3D-CAM was more than twice that with the CAM (probability ratio, 2.78; 95% CI, 2.44-3.23).
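Cohen's κ, the agreement statistic reported above, compares observed rater agreement to the agreement expected by chance from each rater's marginal rates. A minimal sketch with hypothetical counts (the study's underlying 2×2 agreement table is not reproduced here; only the κ formula itself is standard):

```python
# Cohen's kappa for two raters, from a hypothetical 2x2 agreement table of
# concurrent 3D-CAM and CAM assessments. Counts are made up for illustration.
both_pos = 40    # both instruments scored the assessment delirium-positive
both_neg = 400   # both scored it delirium-negative
only_3dcam = 21  # positive on 3D-CAM only
only_cam = 10    # positive on CAM only
n = both_pos + both_neg + only_3dcam + only_cam

p_observed = (both_pos + both_neg) / n
# Chance agreement, from each rater's marginal positive rate
p1 = (both_pos + only_3dcam) / n
p2 = (both_pos + only_cam) / n
p_chance = p1 * p2 + (1 - p1) * (1 - p2)

kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"kappa = {kappa:.2f}")  # roughly 0.68 for these illustrative counts
```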
Conclusion: The high degree of agreement between 3D-CAM and long-form CAM assessments suggests that the former may be a pragmatic and easy-to-administer clinical tool to screen for postoperative delirium in vulnerable older surgical patients.
Study 2 Overview (Shenkin et al)
Objective: To assess the accuracy of the 4 ‘A’s Test (4AT) for delirium detection in the medical inpatient setting and to compare the 4AT to the CAM.
Design: Prospective randomized diagnostic test accuracy study.
Setting and participants: This study was conducted in emergency departments and acute medical wards at 3 UK sites (Edinburgh, Bradford, and Sheffield) and enrolled acute medical patients aged 70 years or older without acute life-threatening illnesses and/or coma. Assessors administering the delirium evaluation were nurses or graduate clinical research associates who underwent systematic training in delirium and delirium assessment. Additional training was provided to those administering the CAM but not to those administering the 4AT as the latter is designed to be administered without special training. First, all participants underwent a reference standard delirium assessment using Diagnostic and Statistical Manual of Mental Disorders (Fourth Edition) (DSM-IV) criteria to derive a final definitive diagnosis of delirium via expert consensus (1 psychiatrist and 2 geriatricians). Then, the participants were randomized to either the 4AT or the comparator CAM group using computer-generated pseudo-random numbers, stratified by study site, with block allocation. All assessments were performed by pairs of independent assessors blinded to the results of the other assessment.
Main outcome measures: All participants were evaluated by the reference standard (DSM-IV criteria for delirium) and by either the 4AT or the CAM instrument. The accuracy of the 4AT was evaluated by comparing its positive and negative predictive values, sensitivity, and specificity to the reference standard and analyzed via the area under the receiver operating characteristic curve. The diagnostic accuracy of the 4AT, compared to the CAM, was evaluated by comparing positive and negative predictive values, sensitivity, and specificity using the Fisher exact test. The overall performance of the 4AT and CAM was summarized using the Youden index and the diagnostic odds ratio.
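All of the accuracy measures named above derive from a single 2×2 table of test result versus reference standard. A short Python sketch using hypothetical counts (not taken from the study's data):

```python
def accuracy_summary(tp, fp, fn, tn):
    """Standard diagnostic-accuracy measures from a 2x2 table.

    tp/fn: test-positive/-negative among reference-standard delirium cases
    fp/tn: test-positive/-negative among cases without delirium
    """
    sens = tp / (tp + fn)            # sensitivity
    spec = tn / (tn + fp)            # specificity
    ppv = tp / (tp + fp)             # positive predictive value
    npv = tn / (tn + fn)             # negative predictive value
    youden = sens + spec - 1         # Youden index J
    dor = (tp * tn) / (fp * fn)      # diagnostic odds ratio
    return sens, spec, ppv, npv, youden, dor

# Illustrative counts only, chosen to roughly echo the reported 4AT performance
sens, spec, ppv, npv, youden, dor = accuracy_summary(tp=38, fp=20, fn=12, tn=322)
```

Note how prevalence enters the predictive values but not sensitivity or specificity: with delirium present in only a minority of patients, even a highly specific test yields a positive predictive value well below its specificity.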
Results: Of the 843 individuals enrolled and randomized, 785 were included in the analysis (23 withdrew, 3 lost contact, 32 indeterminate diagnosis, 2 missing outcome). Of the participants analyzed, the mean [SD] age was 81.4 [6.4] years, and 12.1% (95/785) had delirium by reference standard assessment, 14.3% (56/392) by 4AT, and 4.7% (18/384) by CAM. The 4AT group had an area under the receiver operating characteristic curve of 0.90 (95% CI, 0.84-0.96), a sensitivity of 76% (95% CI, 61%-87%), and a specificity of 94% (95% CI, 92%-97%). In comparison, the CAM group had a sensitivity of 40% (95% CI, 26%-57%) and a specificity of 100% (95% CI, 98%-100%).
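Confidence intervals like those reported for sensitivity and specificity are binomial-proportion intervals; the Wilson score method is a common choice, though the method the study's authors actually used is not stated here. A sketch with an illustrative numerator and denominator (not the study's counts):

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion (z=1.96 -> ~95% CI)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Illustrative: a sensitivity of 76% observed as 38 of 50 reference-standard cases
lo, hi = wilson_ci(38, 50)
```

The interval is asymmetric around the point estimate and stays within [0, 1], which matters for proportions near 0% or 100%, such as the CAM arm's 100% specificity.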
Conclusions: The 4AT is a pragmatic screening test for delirium in the acute medical setting that does not require special training to administer. The use of this instrument may help improve delirium detection as part of routine clinical care in hospitalized older adults.
Commentary
Delirium is an acute confusional state marked by fluctuating mental status, inattention, disorganized thinking, and altered level of consciousness. It is exceedingly common in older patients in both surgical and medical settings and is associated with increased morbidity, mortality, hospital length of stay, institutionalization, and health care costs. Delirium is frequently underdiagnosed in the hospitalized setting, perhaps due to a combination of its waxing and waning nature and a lack of pragmatic and easily implementable screening tools that can be readily administered by clinicians and nonclinicians alike.1 While the CAM is a well-validated instrument to diagnose delirium, it requires specific training in the rating of each of the cardinal features ascertained through a brief cognitive assessment and takes 5 to 10 minutes to complete. Taken together, given the high patient load for clinicians in the hospital setting, the validation and application of brief delirium screening instruments that can be reliably administered by nonphysicians and nonclinicians may enhance delirium detection in vulnerable patients and consequently improve their outcomes.
In Study 1, Oberhaus et al approach the challenge of underdiagnosing delirium in the postoperative setting by investigating whether the widely accepted long-form CAM and an abbreviated 3-minute version, the 3D-CAM, provide similar delirium detection in older surgical patients. The authors found that both instruments were reliable tests individually (high interrater reliability) and had good overall agreement. However, the 3D-CAM was more likely to yield a positive diagnosis of delirium compared to the long-form CAM, consistent with its purpose as a screening tool with high sensitivity. It is important to emphasize that the 3D-CAM not only takes less time to administer but also requires less extensive training and clinical knowledge than the long-form CAM. Therefore, this instrument meets the prerequisites of a brief screening test that can be rapidly administered by nonclinicians and, if positive, followed by a more extensive confirmatory test performed by a clinician. Limitations of this study include the lack of a reference standard structured interview conducted by a physician-rater to better determine the true diagnostic accuracy of both 3D-CAM and CAM assessments, and the use of convenience sampling at a single center, which reduces the generalizability of its findings.
In a similar vein, Shenkin et al in Study 2 attempt to evaluate the utility of the 4AT instrument in diagnosing delirium in older medical inpatients by testing the diagnostic accuracy of the 4AT against a reference standard (ie, DSM-IV–based evaluation by physicians) as well as comparing it to CAM. The 4AT takes less time (~2 minutes) and requires less knowledge and training to administer as compared to the CAM. The study showed that the abbreviated 4AT, compared to CAM, had a higher sensitivity (76% vs 40%) and lower specificity (94% vs 100%) in delirium detection. Thus, akin to the application of 3D-CAM in the postoperative setting, 4AT possesses key characteristics of a brief delirium screening test for older patients in the acute medical setting. In contrast to the Oberhaus et al study, a major strength of this study was the utilization of a reference standard that was validated by expert consensus. This allowed the 4AT and CAM assessments to be compared to a more objective standard, thereby directly testing their diagnostic performance in detecting delirium.
Application for Clinical Practice and System Implementation
The findings from both Study 1 and 2 suggest that using an abbreviated delirium instrument in both surgical and acute medical settings may provide a pragmatic and sensitive method to detect delirium in older patients. The brevity of administration of 3D-CAM (~3 minutes) and 4AT (~2 minutes), combined with their higher sensitivity for detecting delirium compared to CAM, make these instruments potentially effective rapid screening tests for delirium in hospitalized older patients. Importantly, the utilization of such instruments might be a feasible way to mitigate the issue of underdiagnosing delirium in the hospital.
Several additional aspects of these abbreviated delirium instruments increase their suitability for clinical application. Specifically, the 3D-CAM and 4AT require less extensive training and clinical knowledge to both administer and interpret the results than the CAM.2 For instance, a multistage, multiday training for CAM is a key factor in maintaining its diagnostic accuracy.3,4 In contrast, the 3D-CAM requires only a 1- to 2-hour training session, and the 4AT can be administered by a nonclinician without the need for instrument-specific training. Thus, implementation of these instruments can be particularly pragmatic in clinical settings in which the staff involved in delirium screening cannot undergo the substantial training required to administer CAM. Moreover, these abbreviated tests enable nonphysician care team members to assume the role of delirium screener in the hospital. Taken together, the adoption of these abbreviated instruments may facilitate brief screenings of delirium in older patients by caregivers who see them most often—nurses and certified nursing assistants—thereby improving early detection and prevention of delirium-related complications in the hospital.
The feasibility of using abbreviated delirium screening instruments in the hospital setting raises a system implementation question—if these instruments are designed to be administered by those with limited to no training, could nonclinicians, such as hospital volunteers, effectively take on delirium screening roles in the hospital? If volunteers are able to take on this role, the integration of hospital volunteers into the clinical team can greatly expand the capacity for delirium screening in the hospital setting. Further research is warranted to validate the diagnostic accuracy of 3D-CAM and 4AT by nonclinician administrators in order to more broadly adopt this approach to delirium screening.
Practice Points
- Abbreviated delirium screening tools such as the 3D-CAM and 4AT may be pragmatic instruments to improve delirium detection in older surgical patients and older medical inpatients, respectively.
- Further studies are warranted to validate the diagnostic accuracy of 3D-CAM and 4AT by nonclinician administrators in order to more broadly adopt this approach to delirium screening.
Jared Doan, BS, and Fred Ko, MD
Geriatrics and Palliative Medicine, Icahn School of Medicine at Mount Sinai
1. Fong TG, Tulebaev SR, Inouye SK. Delirium in elderly adults: diagnosis, prevention and treatment. Nat Rev Neurol. 2009;5(4):210-220. doi:10.1038/nrneurol.2009.24
2. Marcantonio ER, Ngo LH, O’Connor M, et al. 3D-CAM: derivation and validation of a 3-minute diagnostic interview for CAM-defined delirium: a cross-sectional diagnostic test study. Ann Intern Med. 2014;161(8):554-561. doi:10.7326/M14-0865
3. Green JR, Smith J, Teale E, et al. Use of the confusion assessment method in multicentre delirium trials: training and standardisation. BMC Geriatr. 2019;19(1):107. doi:10.1186/s12877-019-1129-8
4. Wei LA, Fearing MA, Sternberg EJ, Inouye SK. The Confusion Assessment Method: a systematic review of current usage. J Am Geriatr Soc. 2008;56(5):823-830. doi:10.1111/j.1532-5415.2008.01674.x
Study 1 Overview (Oberhaus et al)
Objective: To compare the 3-Minute Diagnostic Confusion Assessment Method (3D-CAM) to the long-form Confusion Assessment Method (CAM) in detecting postoperative delirium.
Design: Prospective concurrent comparison of 3D-CAM and CAM evaluations in a cohort of postoperative geriatric patients.
Setting and participants: Eligible participants were patients aged 60 years or older undergoing major elective surgery at Barnes Jewish Hospital (St. Louis, Missouri) who were enrolled in ongoing clinical trials (PODCAST, ENGAGES, SATISFY-SOS) between 2015 and 2018. Surgeries were at least 2 hours in length and required general anesthesia, planned extubation, and a minimum 2-day hospital stay. Investigators were extensively trained in administering 3D-CAM and CAM instruments. Participants were evaluated 2 hours after the end of anesthesia care on the day of surgery, then daily until follow-up was completed per clinical trial protocol or until the participant was determined by CAM to be nondelirious for 3 consecutive days. For each evaluation, both 3D-CAM and CAM assessors approached the participant together, but the evaluation was conducted such that the 3D-CAM assessor was masked to the additional questions ascertained by the long-form CAM assessment. The 3D-CAM or CAM assessor independently scored their respective assessments blinded to the results of the other assessor.
Main outcome measures: Participants were concurrently evaluated for postoperative delirium by both 3D-CAM and long-form CAM assessments. Comparisons between 3D-CAM and CAM scores were made using Cohen κ with repeated measures, generalized linear mixed-effects model, and Bland-Altman analysis.
Main results: Sixteen raters performed 471 concurrent 3D-CAM and CAM assessments in 299 participants (mean [SD] age, 69 [6.5] years). Of these participants, 152 (50.8%) were men, 263 (88.0%) were White, and 211 (70.6%) underwent noncardiac surgery. Both instruments showed good intraclass correlation (0.98 for 3D-CAM, 0.84 for CAM) with good overall agreement (Cohen κ = 0.71; 95% CI, 0.58-0.83). The mixed-effects model indicated a significant disagreement between the 3D-CAM and CAM assessments (estimated difference in fixed effect, –0.68; 95% CI, –1.32 to –0.05; P = .04). The Bland-Altman analysis showed that the probability of a delirium diagnosis with the 3D-CAM was more than twice that with the CAM (probability ratio, 2.78; 95% CI, 2.44-3.23).
Conclusion: The high degree of agreement between 3D-CAM and long-form CAM assessments suggests that the former may be a pragmatic and easy-to-administer clinical tool to screen for postoperative delirium in vulnerable older surgical patients.
Study 2 Overview (Shenkin et al)
Objective: To assess the accuracy of the 4 ‘A’s Test (4AT) for delirium detection in the medical inpatient setting and to compare the 4AT to the CAM.
Design: Prospective randomized diagnostic test accuracy study.
Setting and participants: This study was conducted in emergency departments and acute medical wards at 3 UK sites (Edinburgh, Bradford, and Sheffield) and enrolled acute medical patients aged 70 years or older without acute life-threatening illnesses and/or coma. Assessors administering the delirium evaluation were nurses or graduate clinical research associates who underwent systematic training in delirium and delirium assessment. Additional training was provided to those administering the CAM but not to those administering the 4AT as the latter is designed to be administered without special training. First, all participants underwent a reference standard delirium assessment using Diagnostic and Statistical Manual of Mental Disorders (Fourth Edition) (DSM-IV) criteria to derive a final definitive diagnosis of delirium via expert consensus (1 psychiatrist and 2 geriatricians). Then, the participants were randomized to either the 4AT or the comparator CAM group using computer-generated pseudo-random numbers, stratified by study site, with block allocation. All assessments were performed by pairs of independent assessors blinded to the results of the other assessment.
Main outcome measures: All participants were evaluated by the reference standard (DSM-IV criteria for delirium) and by either 4AT or CAM instruments for delirium. The accuracy of the 4AT instrument was evaluated by comparing its positive and negative predictive values, sensitivity, and specificity to the reference standard and analyzed via the area under the receiver operating characteristic curve. The diagnostic accuracy of 4AT, compared to the CAM, was evaluated by comparing positive and negative predictive values, sensitivity, and specificity using Fisher’s exact test. The overall performance of 4AT and CAM was summarized using Youden’s Index and the diagnostic odds ratio of sensitivity to specificity.
Results: All 843 individuals enrolled in the study were randomized and 785 were included in the analysis (23 withdrew, 3 lost contact, 32 indeterminate diagnosis, 2 missing outcome). Of the participants analyzed, the mean age was 81.4 [6.4] years, and 12.1% (95/785) had delirium by reference standard assessment, 14.3% (56/392) by 4AT, and 4.7% (18/384) by CAM. The 4AT group had an area under the receiver operating characteristic curve of 0.90 (95% CI, 0.84-0.96), a sensitivity of 76% (95% CI, 61%-87%), and a specificity of 94% (95% CI, 92%-97%). In comparison, the CAM group had a sensitivity of 40% (95% CI, 26%-57%) and a specificity of 100% (95% CI, 98%-100%).
Conclusions: The 4AT is a pragmatic screening test for delirium in a medical space that does not require special training to administer. The use of this instrument may help to improve delirium detection as a part of routine clinical care in hospitalized older adults.
Commentary
Delirium is an acute confusional state marked by fluctuating mental status, inattention, disorganized thinking, and altered level of consciousness. It is exceedingly common in older patients in both surgical and medical settings and is associated with increased morbidity, mortality, hospital length of stay, institutionalization, and health care costs. Delirium is frequently underdiagnosed in the hospitalized setting, perhaps due to a combination of its waxing and waning nature and a lack of pragmatic and easily implementable screening tools that can be readily administered by clinicians and nonclinicians alike.1 While the CAM is a well-validated instrument to diagnose delirium, it requires specific training in the rating of each of the cardinal features ascertained through a brief cognitive assessment and takes 5 to 10 minutes to complete. Taken together, given the high patient load for clinicians in the hospital setting, the validation and application of brief delirium screening instruments that can be reliably administered by nonphysicians and nonclinicians may enhance delirium detection in vulnerable patients and consequently improve their outcomes.
In Study 1, Oberhaus et al approach the challenge of underdiagnosing delirium in the postoperative setting by investigating whether the widely accepted long-form CAM and an abbreviated 3-minute version, the 3D-CAM, provide similar delirium detection in older surgical patients. The authors found that both instruments were reliable tests individually (high interrater reliability) and had good overall agreement. However, the 3D-CAM was more likely to yield a positive diagnosis of delirium compared to the long-form CAM, consistent with its purpose as a screening tool with a high sensitivity. It is important to emphasize that the 3D-CAM takes less time to administer, but also requires less extensive training and clinical knowledge than the long-form CAM. Therefore, this instrument meets the prerequisite of a brief screening test that can be rapidly administered by nonclinicians, and if affirmative, followed by a more extensive confirmatory test performed by a clinician. Limitations of this study include a lack of a reference standard structured interview conducted by a physician-rater to better determine the true diagnostic accuracy of both 3D-CAM and CAM assessments, and the use of convenience sampling at a single center, which reduces the generalizability of its findings.
In a similar vein, Shenkin et al in Study 2 attempt to evaluate the utility of the 4AT instrument in diagnosing delirium in older medical inpatients by testing the diagnostic accuracy of the 4AT against a reference standard (ie, DSM-IV–based evaluation by physicians) as well as comparing it to CAM. The 4AT takes less time (~2 minutes) and requires less knowledge and training to administer as compared to the CAM. The study showed that the abbreviated 4AT, compared to CAM, had a higher sensitivity (76% vs 40%) and lower specificity (94% vs 100%) in delirium detection. Thus, akin to the application of 3D-CAM in the postoperative setting, 4AT possesses key characteristics of a brief delirium screening test for older patients in the acute medical setting. In contrast to the Oberhaus et al study, a major strength of this study was the utilization of a reference standard that was validated by expert consensus. This allowed the 4AT and CAM assessments to be compared to a more objective standard, thereby directly testing their diagnostic performance in detecting delirium.
Application for Clinical Practice and System Implementation
The findings from both Study 1 and 2 suggest that using an abbreviated delirium instrument in both surgical and acute medical settings may provide a pragmatic and sensitive method to detect delirium in older patients. The brevity of administration of 3D-CAM (~3 minutes) and 4AT (~2 minutes), combined with their higher sensitivity for detecting delirium compared to CAM, make these instruments potentially effective rapid screening tests for delirium in hospitalized older patients. Importantly, the utilization of such instruments might be a feasible way to mitigate the issue of underdiagnosing delirium in the hospital.
Several additional aspects of these abbreviated delirium instruments increase their suitability for clinical application. Specifically, the 3D-CAM and 4AT require less extensive training and clinical knowledge to both administer and interpret the results than the CAM.2 For instance, a multistage, multiday training for CAM is a key factor in maintaining its diagnostic accuracy.3,4 In contrast, the 3D-CAM requires only a 1- to 2-hour training session, and the 4AT can be administered by a nonclinician without the need for instrument-specific training. Thus, implementation of these instruments can be particularly pragmatic in clinical settings in which the staff involved in delirium screening cannot undergo the substantial training required to administer CAM. Moreover, these abbreviated tests enable nonphysician care team members to assume the role of delirium screener in the hospital. Taken together, the adoption of these abbreviated instruments may facilitate brief screenings of delirium in older patients by caregivers who see them most often—nurses and certified nursing assistants—thereby improving early detection and prevention of delirium-related complications in the hospital.
The feasibility of using abbreviated delirium screening instruments in the hospital setting raises a system implementation question—if these instruments are designed to be administered by those with limited to no training, could nonclinicians, such as hospital volunteers, effectively take on delirium screening roles in the hospital? If volunteers are able to take on this role, the integration of hospital volunteers into the clinical team can greatly expand the capacity for delirium screening in the hospital setting. Further research is warranted to validate the diagnostic accuracy of 3D-CAM and 4AT by nonclinician administrators in order to more broadly adopt this approach to delirium screening.
Practice Points
- Abbreviated delirium screening tools such as 3D-CAM and 4AT may be pragmatic instruments to improve delirium detection in surgical and hospitalized older patients, respectively.
- Further studies are warranted to validate the diagnostic accuracy of 3D-CAM and 4AT by nonclinician administrators in order to more broadly adopt this approach to delirium screening.
Jared Doan, BS, and Fred Ko, MD
Geriatrics and Palliative Medicine, Icahn School of Medicine at Mount Sinai
Study 1 Overview (Oberhaus et al)
Objective: To compare the 3-Minute Diagnostic Confusion Assessment Method (3D-CAM) to the long-form Confusion Assessment Method (CAM) in detecting postoperative delirium.
Design: Prospective concurrent comparison of 3D-CAM and CAM evaluations in a cohort of postoperative geriatric patients.
Setting and participants: Eligible participants were patients aged 60 years or older undergoing major elective surgery at Barnes Jewish Hospital (St. Louis, Missouri) who were enrolled in ongoing clinical trials (PODCAST, ENGAGES, SATISFY-SOS) between 2015 and 2018. Surgeries were at least 2 hours in length and required general anesthesia, planned extubation, and a minimum 2-day hospital stay. Investigators were extensively trained in administering 3D-CAM and CAM instruments. Participants were evaluated 2 hours after the end of anesthesia care on the day of surgery, then daily until follow-up was completed per clinical trial protocol or until the participant was determined by CAM to be nondelirious for 3 consecutive days. For each evaluation, both 3D-CAM and CAM assessors approached the participant together, but the evaluation was conducted such that the 3D-CAM assessor was masked to the additional questions ascertained by the long-form CAM assessment. The 3D-CAM or CAM assessor independently scored their respective assessments blinded to the results of the other assessor.
Main outcome measures: Participants were concurrently evaluated for postoperative delirium by both 3D-CAM and long-form CAM assessments. Comparisons between 3D-CAM and CAM scores were made using Cohen κ with repeated measures, generalized linear mixed-effects model, and Bland-Altman analysis.
Main results: Sixteen raters performed 471 concurrent 3D-CAM and CAM assessments in 299 participants (mean [SD] age, 69 [6.5] years). Of these participants, 152 (50.8%) were men, 263 (88.0%) were White, and 211 (70.6%) underwent noncardiac surgery. Both instruments showed good intraclass correlation (0.98 for 3D-CAM, 0.84 for CAM) with good overall agreement (Cohen κ = 0.71; 95% CI, 0.58-0.83). The mixed-effects model indicated a significant disagreement between the 3D-CAM and CAM assessments (estimated difference in fixed effect, –0.68; 95% CI, –1.32 to –0.05; P = .04). The Bland-Altman analysis showed that the probability of a delirium diagnosis with the 3D-CAM was more than twice that with the CAM (probability ratio, 2.78; 95% CI, 2.44-3.23).
Conclusion: The high degree of agreement between 3D-CAM and long-form CAM assessments suggests that the former may be a pragmatic and easy-to-administer clinical tool to screen for postoperative delirium in vulnerable older surgical patients.
Study 2 Overview (Shenkin et al)
Objective: To assess the accuracy of the 4 ‘A’s Test (4AT) for delirium detection in the medical inpatient setting and to compare the 4AT to the CAM.
Design: Prospective randomized diagnostic test accuracy study.
Setting and participants: This study was conducted in emergency departments and acute medical wards at 3 UK sites (Edinburgh, Bradford, and Sheffield) and enrolled acute medical patients aged 70 years or older without acute life-threatening illnesses and/or coma. Assessors administering the delirium evaluation were nurses or graduate clinical research associates who underwent systematic training in delirium and delirium assessment. Additional training was provided to those administering the CAM but not to those administering the 4AT as the latter is designed to be administered without special training. First, all participants underwent a reference standard delirium assessment using Diagnostic and Statistical Manual of Mental Disorders (Fourth Edition) (DSM-IV) criteria to derive a final definitive diagnosis of delirium via expert consensus (1 psychiatrist and 2 geriatricians). Then, the participants were randomized to either the 4AT or the comparator CAM group using computer-generated pseudo-random numbers, stratified by study site, with block allocation. All assessments were performed by pairs of independent assessors blinded to the results of the other assessment.
Main outcome measures: All participants were evaluated by the reference standard (DSM-IV criteria for delirium) and by either 4AT or CAM instruments for delirium. The accuracy of the 4AT instrument was evaluated by comparing its positive and negative predictive values, sensitivity, and specificity to the reference standard and analyzed via the area under the receiver operating characteristic curve. The diagnostic accuracy of 4AT, compared to the CAM, was evaluated by comparing positive and negative predictive values, sensitivity, and specificity using Fisher’s exact test. The overall performance of 4AT and CAM was summarized using Youden’s Index and the diagnostic odds ratio of sensitivity to specificity.
Results: All 843 individuals enrolled in the study were randomized and 785 were included in the analysis (23 withdrew, 3 lost contact, 32 indeterminate diagnosis, 2 missing outcome). Of the participants analyzed, the mean age was 81.4 [6.4] years, and 12.1% (95/785) had delirium by reference standard assessment, 14.3% (56/392) by 4AT, and 4.7% (18/384) by CAM. The 4AT group had an area under the receiver operating characteristic curve of 0.90 (95% CI, 0.84-0.96), a sensitivity of 76% (95% CI, 61%-87%), and a specificity of 94% (95% CI, 92%-97%). In comparison, the CAM group had a sensitivity of 40% (95% CI, 26%-57%) and a specificity of 100% (95% CI, 98%-100%).
Conclusions: The 4AT is a pragmatic screening test for delirium in a medical space that does not require special training to administer. The use of this instrument may help to improve delirium detection as a part of routine clinical care in hospitalized older adults.
Commentary
Delirium is an acute confusional state marked by fluctuating mental status, inattention, disorganized thinking, and altered level of consciousness. It is exceedingly common in older patients in both surgical and medical settings and is associated with increased morbidity, mortality, hospital length of stay, institutionalization, and health care costs. Delirium is frequently underdiagnosed in the hospitalized setting, perhaps due to a combination of its waxing and waning nature and a lack of pragmatic and easily implementable screening tools that can be readily administered by clinicians and nonclinicians alike.1 While the CAM is a well-validated instrument to diagnose delirium, it requires specific training in the rating of each of the cardinal features ascertained through a brief cognitive assessment and takes 5 to 10 minutes to complete. Taken together, given the high patient load for clinicians in the hospital setting, the validation and application of brief delirium screening instruments that can be reliably administered by nonphysicians and nonclinicians may enhance delirium detection in vulnerable patients and consequently improve their outcomes.
In Study 1, Oberhaus et al approach the challenge of underdiagnosing delirium in the postoperative setting by investigating whether the widely accepted long-form CAM and an abbreviated 3-minute version, the 3D-CAM, provide similar delirium detection in older surgical patients. The authors found that both instruments were reliable tests individually (high interrater reliability) and had good overall agreement. However, the 3D-CAM was more likely to yield a positive diagnosis of delirium compared to the long-form CAM, consistent with its purpose as a screening tool with a high sensitivity. It is important to emphasize that the 3D-CAM takes less time to administer, but also requires less extensive training and clinical knowledge than the long-form CAM. Therefore, this instrument meets the prerequisite of a brief screening test that can be rapidly administered by nonclinicians, and if affirmative, followed by a more extensive confirmatory test performed by a clinician. Limitations of this study include a lack of a reference standard structured interview conducted by a physician-rater to better determine the true diagnostic accuracy of both 3D-CAM and CAM assessments, and the use of convenience sampling at a single center, which reduces the generalizability of its findings.
In a similar vein, Shenkin et al in Study 2 attempt to evaluate the utility of the 4AT instrument in diagnosing delirium in older medical inpatients by testing the diagnostic accuracy of the 4AT against a reference standard (ie, DSM-IV–based evaluation by physicians) as well as comparing it to CAM. The 4AT takes less time (~2 minutes) and requires less knowledge and training to administer as compared to the CAM. The study showed that the abbreviated 4AT, compared to CAM, had a higher sensitivity (76% vs 40%) and lower specificity (94% vs 100%) in delirium detection. Thus, akin to the application of 3D-CAM in the postoperative setting, 4AT possesses key characteristics of a brief delirium screening test for older patients in the acute medical setting. In contrast to the Oberhaus et al study, a major strength of this study was the utilization of a reference standard that was validated by expert consensus. This allowed the 4AT and CAM assessments to be compared to a more objective standard, thereby directly testing their diagnostic performance in detecting delirium.
Application for Clinical Practice and System Implementation
The findings from both Study 1 and 2 suggest that using an abbreviated delirium instrument in both surgical and acute medical settings may provide a pragmatic and sensitive method to detect delirium in older patients. The brevity of administration of 3D-CAM (~3 minutes) and 4AT (~2 minutes), combined with their higher sensitivity for detecting delirium compared to CAM, make these instruments potentially effective rapid screening tests for delirium in hospitalized older patients. Importantly, the utilization of such instruments might be a feasible way to mitigate the issue of underdiagnosing delirium in the hospital.
Several additional aspects of these abbreviated delirium instruments increase their suitability for clinical application. Specifically, the 3D-CAM and 4AT require less extensive training and clinical knowledge to both administer and interpret the results than the CAM.2 For instance, a multistage, multiday training for CAM is a key factor in maintaining its diagnostic accuracy.3,4 In contrast, the 3D-CAM requires only a 1- to 2-hour training session, and the 4AT can be administered by a nonclinician without the need for instrument-specific training. Thus, implementation of these instruments can be particularly pragmatic in clinical settings in which the staff involved in delirium screening cannot undergo the substantial training required to administer CAM. Moreover, these abbreviated tests enable nonphysician care team members to assume the role of delirium screener in the hospital. Taken together, the adoption of these abbreviated instruments may facilitate brief screenings of delirium in older patients by caregivers who see them most often—nurses and certified nursing assistants—thereby improving early detection and prevention of delirium-related complications in the hospital.
The feasibility of using abbreviated delirium screening instruments in the hospital setting raises a system implementation question—if these instruments are designed to be administered by those with limited to no training, could nonclinicians, such as hospital volunteers, effectively take on delirium screening roles in the hospital? If volunteers are able to take on this role, the integration of hospital volunteers into the clinical team can greatly expand the capacity for delirium screening in the hospital setting. Further research is warranted to validate the diagnostic accuracy of 3D-CAM and 4AT by nonclinician administrators in order to more broadly adopt this approach to delirium screening.
Practice Points
- Abbreviated delirium screening tools such as the 3D-CAM and 4AT may be pragmatic instruments for improving delirium detection in older patients in surgical and acute medical settings, respectively.
- Further studies are warranted to validate the diagnostic accuracy of 3D-CAM and 4AT by nonclinician administrators in order to more broadly adopt this approach to delirium screening.
Jared Doan, BS, and Fred Ko, MD
Geriatrics and Palliative Medicine, Icahn School of Medicine at Mount Sinai
1. Fong TG, Tulebaev SR, Inouye SK. Delirium in elderly adults: diagnosis, prevention and treatment. Nat Rev Neurol. 2009;5(4):210-220. doi:10.1038/nrneurol.2009.24
2. Marcantonio ER, Ngo LH, O’Connor M, et al. 3D-CAM: derivation and validation of a 3-minute diagnostic interview for CAM-defined delirium: a cross-sectional diagnostic test study. Ann Intern Med. 2014;161(8):554-561. doi:10.7326/M14-0865
3. Green JR, Smith J, Teale E, et al. Use of the confusion assessment method in multicentre delirium trials: training and standardisation. BMC Geriatr. 2019;19(1):107. doi:10.1186/s12877-019-1129-8
4. Wei LA, Fearing MA, Sternberg EJ, Inouye SK. The Confusion Assessment Method: a systematic review of current usage. J Am Geriatr Soc. 2008;56(5):823-830. doi:10.1111/j.1532-5415.2008.01674.x
New ESC guidelines for cutting CV risk in noncardiac surgery
The European Society of Cardiology guidelines on cardiovascular assessment and management of patients undergoing noncardiac surgery have seen extensive revision since the 2014 version.
They still have the same aim – to prevent surgery-related bleeding complications, perioperative myocardial infarction/injury (PMI), stent thrombosis, acute heart failure, arrhythmias, pulmonary embolism, ischemic stroke, and cardiovascular (CV) death.
Cochairpersons Sigrun Halvorsen, MD, PhD, and Julinda Mehilli, MD, presented highlights from the guidelines at the annual congress of the European Society of Cardiology and the document was simultaneously published online in the European Heart Journal.
The document classifies noncardiac surgery into three levels of 30-day risk of CV death, MI, or stroke. Low (< 1%) risk includes eye or thyroid surgery; intermediate (1%-5%) risk includes knee or hip replacement or renal transplant; and high (> 5%) risk includes aortic aneurysm, lung transplant, or pancreatic or bladder cancer surgery (see more examples below).
It classifies patients as low risk if they are younger than 65 without CV disease or CV risk factors (smoking, hypertension, diabetes, dyslipidemia, family history); intermediate risk if they are 65 or older or have CV risk factors; and high risk if they have CVD.
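The two classifications above amount to a pair of simple decision rules. The sketch below illustrates them using only the examples given in this summary; the procedure names are illustrative strings, and the full guideline tables contain many more categories:

```python
# Illustrative sketch of the two risk classifications summarized above.
# Based on the article's examples only, not the guideline's full tables.

def surgery_risk(procedure):
    """30-day risk of CV death, MI, or stroke, by procedure (examples only)."""
    low = {"eye", "thyroid"}
    intermediate = {"knee replacement", "hip replacement", "renal transplant"}
    high = {"aortic aneurysm", "lung transplant",
            "pancreatic cancer", "bladder cancer"}
    if procedure in low:
        return "low (<1%)"
    if procedure in intermediate:
        return "intermediate (1%-5%)"
    if procedure in high:
        return "high (>5%)"
    return "unclassified"

def patient_risk(age, has_cvd, has_risk_factors):
    """Patient risk per the summary: CVD > age 65+/risk factors > neither."""
    if has_cvd:
        return "high"
    if age >= 65 or has_risk_factors:
        return "intermediate"
    return "low"

print(surgery_risk("renal transplant"))   # intermediate (1%-5%)
print(patient_risk(70, False, False))     # intermediate
```

The key point the sketch captures is that the two axes are independent: a low-risk patient can still face a high-risk procedure, and the biomarker and ECG recommendations that follow hinge on the combination.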
In an interview, Dr. Halvorsen, professor in cardiology, University of Oslo, zeroed in on three important revisions:
First, recommendations for preoperative ECG and biomarkers are more specific, she noted.
The guidelines advise that before intermediate- or high-risk noncardiac surgery, in patients who have known CVD, CV risk factors (including age 65 or older), or symptoms suggestive of CVD:
- It is recommended to obtain a preoperative 12-lead ECG (class I).
- It is recommended to measure high-sensitivity cardiac troponin T (hs-cTn T) or high-sensitivity cardiac troponin I (hs-cTn I). It is also recommended to measure these biomarkers at 24 hours and 48 hours post surgery (class I).
- It should be considered to measure B-type natriuretic peptide (BNP) or N-terminal pro–B-type natriuretic peptide (NT-proBNP) (class IIa).
However, for low-risk patients undergoing low- and intermediate-risk noncardiac surgery, it is not recommended to routinely obtain preoperative ECG, hs-cTn T/I, or BNP/NT-proBNP concentrations (class III).
Troponins have a stronger class I recommendation, compared with the IIA recommendation for BNP, because they are useful for preoperative risk stratification and for diagnosis of PMI, Dr. Halvorsen explained. “Patients receive painkillers after surgery and may have no pain,” she noted, but they may have PMI, which has a bad prognosis.
Second, the guidelines recommend that “all patients should stop smoking 4 weeks before noncardiac surgery [class I],” she noted. Clinicians should also “measure hemoglobin, and if the patient is anemic, treat the anemia.”
Third, the sections on antithrombotic treatment have been significantly revised. “Bridging – stopping an oral antithrombotic drug and switching to a subcutaneous or IV drug – has been common,” Dr. Halvorsen said, “but recently we have new evidence that in most cases that increases the risk of bleeding.”
“We are [now] much more restrictive with respect to bridging” with unfractionated heparin or low-molecular-weight heparin, she said. “We recommend against bridging in patients with low to moderate thrombotic risk,” and bridging should only be considered in patients with mechanical prosthetic heart valves or with very high thrombotic risk.
More preoperative recommendations
In the guideline overview session at the congress, Dr. Halvorsen highlighted some of the new recommendations for preoperative risk assessment.
If time allows, it is recommended to optimize guideline-recommended treatment of CVD and control of CV risk factors including blood pressure, dyslipidemia, and diabetes, before noncardiac surgery (class I).
Patients commonly have “murmurs, chest pain, dyspnea, and edema that may suggest severe CVD, but may also be caused by noncardiac disease,” she noted. The guidelines state that “for patients with a newly detected murmur and symptoms or signs of CVD, transthoracic echocardiography is recommended before noncardiac surgery (class I).”
“Many studies have been performed to try to find out if initiation of specific drugs before surgery could reduce the risk of complications,” Dr. Halvorsen noted. However, few have shown any benefit and “the question of presurgery initiation of beta-blockers has been greatly debated,” she said. “We have again reviewed the literature and concluded ‘Routine initiation of beta-blockers perioperatively is not recommended (class IIIA).’”
“We adhere to the guidelines on acute and chronic coronary syndrome recommending 6-12 months of dual antiplatelet treatment as a standard before elective surgery,” she said. “However, in case of time-sensitive surgery, the duration of that treatment can be shortened down to a minimum of 1 month after elective PCI and a minimum of 3 months after PCI and ACS.”
Patients with specific types of CVD
Dr. Mehilli, a professor at Landshut-Achdorf (Germany) Hospital, highlighted some new guideline recommendations for patients who have specific types of cardiovascular disease.
Coronary artery disease (CAD). “For chronic coronary syndrome, a cardiac workup is recommended only for patients undergoing intermediate risk or high-risk noncardiac surgery.”
“Stress imaging should be considered before any high-risk noncardiac surgery in asymptomatic patients with poor functional capacity and prior PCI or coronary artery bypass graft (new recommendation, class IIa).”
Mitral valve regurgitation. For patients undergoing scheduled noncardiac surgery, who remain symptomatic despite guideline-directed medical treatment for mitral valve regurgitation (including resynchronization and myocardial revascularization), consider a valve intervention – either transcatheter or surgical – before noncardiac surgery in eligible patients with acceptable procedural risk (new recommendation).
Cardiac implantable electronic devices (CIED). For high-risk patients with CIEDs undergoing noncardiac surgery with high probability of electromagnetic interference, a CIED checkup and necessary reprogramming immediately before the procedure should be considered (new recommendation).
Arrhythmias. “I want only to stress,” Dr. Mehilli said, “in patients with atrial fibrillation with acute or worsening hemodynamic instability undergoing noncardiac surgery, an emergency electrical cardioversion is recommended (class I).”
Peripheral artery disease (PAD) and abdominal aortic aneurysm. For these patients “we do not recommend a routine referral for a cardiac workup. But we recommend it for patients with poor functional capacity or with significant risk factors or symptoms (new recommendations).”
Chronic arterial hypertension. “We have modified the recommendation, recommending avoidance of large perioperative fluctuations in blood pressure, and we do not recommend deferring noncardiac surgery in patients with stage 1 or 2 hypertension,” she said.
Postoperative cardiovascular complications
The most frequent postoperative cardiovascular complication is PMI, Dr. Mehilli noted.
“In the BASEL-PMI registry, the incidence of this complication around intermediate or high-risk noncardiac surgery was up to 15% among patients older than 65 years or with a history of CAD or PAD, which makes this kind of complication really important to prevent, to assess, and to know how to treat.”
“It is recommended to have a high awareness for perioperative cardiovascular complications, combined with surveillance for PMI in patients undergoing intermediate- or high-risk noncardiac surgery” based on serial measurements of high-sensitivity cardiac troponin.
The guidelines define PMI as “an increase in the delta of high-sensitivity troponin more than the upper level of normal,” Dr. Mehilli said. “It’s different from the one used in a rule-in algorithm for non-STEMI acute coronary syndrome.”
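The quoted definition is a delta rule over serial troponin measurements, which can be sketched as follows. The upper reference limit used here is a hypothetical placeholder; real cutoffs are assay-specific and must come from the laboratory:

```python
# Illustrative sketch of the quoted PMI definition: flag PMI when the rise
# (delta) between serial high-sensitivity troponin values exceeds the
# assay's upper reference limit. The ULN below is a HYPOTHETICAL value.

ULN_NG_L = 14  # hypothetical upper reference limit, ng/L (assay-specific)

def pmi_flag(troponin_series):
    """Return True if any rise between consecutive measurements exceeds ULN."""
    deltas = [later - earlier
              for earlier, later in zip(troponin_series, troponin_series[1:])]
    return any(d > ULN_NG_L for d in deltas)

print(pmi_flag([10, 12, 30]))  # a rise of 18 ng/L exceeds the 14 ng/L ULN
```

This also makes concrete why the guideline recommends serial (preoperative plus 24- and 48-hour) measurements: the rule is defined on the change between values, not on any single absolute level.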
Postoperative atrial fibrillation (AFib) is observed in 2%-30% of noncardiac surgery patients in different registries, particularly in patients undergoing intermediate or high-risk noncardiac surgery, she noted.
“We propose an algorithm on how to prevent and treat this complication. I want to highlight that in patients with hemodynamic unstable postoperative AF[ib], an emergency cardioversion is indicated. For the others, a rate control with the target heart rate of less than 110 beats per minute is indicated.”
In patients with postoperative AFib, long-term oral anticoagulation therapy should be considered in all patients at risk for stroke, considering the anticipated net clinical benefit of oral anticoagulation therapy as well as informed patient preference (new recommendations).
Routine use of beta-blockers to prevent postoperative AFib in patients undergoing noncardiac surgery is not recommended.
The document also covers the management of patients with kidney disease, diabetes, cancer, obesity, and COVID-19. In general, elective noncardiac surgery should be postponed until the patient has recovered completely from COVID-19 and coexisting conditions are optimized.
The guidelines are available from the ESC website in several formats: pocket guidelines, pocket guidelines smartphone app, guidelines slide set, essential messages, and the European Heart Journal article.
Noncardiac surgery risk categories
The guideline includes a table that classifies noncardiac surgeries into three groups, based on the associated 30-day risk of death, MI, or stroke:
- Low (< 1%): breast, dental, eye, thyroid, and minor gynecologic, orthopedic, and urologic surgery.
- Intermediate (1%-5%): carotid surgery, endovascular aortic aneurysm repair, gallbladder surgery, head or neck surgery, hernia repair, peripheral arterial angioplasty, renal transplant, major gynecologic, orthopedic, or neurologic (hip or spine) surgery, or urologic surgery.
- High (> 5%): aortic and major vascular surgery (including aortic aneurysm), bladder removal (usually as a result of cancer), limb amputation, lung or liver transplant, pancreatic surgery, or perforated bowel repair.
The guidelines were endorsed by the European Society of Anaesthesiology and Intensive Care. The guideline authors reported numerous disclosures.
A version of this article first appeared on Medscape.com.
The European Society of Cardiology guidelines on cardiovascular assessment and management of patients undergoing noncardiac surgery have seen extensive revision since the 2014 version.
They still have the same aim – to prevent surgery-related bleeding complications, perioperative myocardial infarction/injury (PMI), stent thrombosis, acute heart failure, arrhythmias, pulmonary embolism, ischemic stroke, and cardiovascular (CV) death.
Cochairpersons Sigrun Halvorsen, MD, PhD, and Julinda Mehilli, MD, presented highlights from the guidelines at the annual congress of the European Society of Cardiology and the document was simultaneously published online in the European Heart Journal.
The document classifies noncardiac surgery into three levels of 30-day risk of CV death, MI, or stroke. Low (< 1%) risk includes eye or thyroid surgery; intermediate (1%-5%) risk includes knee or hip replacement or renal transplant; and high (> 5%) risk includes aortic aneurysm, lung transplant, or pancreatic or bladder cancer surgery (see more examples below).
It classifies patients as low risk if they are younger than 65 without CV disease or CV risk factors (smoking, hypertension, diabetes, dyslipidemia, family history); intermediate risk if they are 65 or older or have CV risk factors; and high risk if they have CVD.
In an interview, Dr. Halvorsen, professor in cardiology, University of Oslo, zeroed in on three important revisions:
First, recommendations for preoperative ECG and biomarkers are more specific, he noted.
The guidelines advise that before intermediate- or high-risk noncardiac surgery, in patients who have known CVD, CV risk factors (including age 65 or older), or symptoms suggestive of CVD:
- It is recommended to obtain a preoperative 12-lead ECG (class I).
- It is recommended to measure high-sensitivity cardiac troponin T (hs-cTn T) or high-sensitivity cardiac troponin I (hs-cTn I). It is also recommended to measure these biomarkers at 24 hours and 48 hours post surgery (class I).
- It should be considered to measure B-type natriuretic peptide or N-terminal of the prohormone BNP (NT-proBNP).
However, for low-risk patients undergoing low- and intermediate-risk noncardiac surgery, it is not recommended to routinely obtain preoperative ECG, hs-cTn T/I, or BNP/NT-proBNP concentrations (class III).
Troponins have a stronger class I recommendation, compared with the IIA recommendation for BNP, because they are useful for preoperative risk stratification and for diagnosis of PMI, Dr. Halvorsen explained. “Patients receive painkillers after surgery and may have no pain,” she noted, but they may have PMI, which has a bad prognosis.
Second, the guidelines recommend that “all patients should stop smoking 4 weeks before noncardiac surgery [class I],” she noted. Clinicians should also “measure hemoglobin, and if the patient is anemic, treat the anemia.”
Third, the sections on antithrombotic treatment have been significantly revised. “Bridging – stopping an oral antithrombotic drug and switching to a subcutaneous or IV drug – has been common,” Dr. Halvorsen said, “but recently we have new evidence that in most cases that increases the risk of bleeding.”
“We are [now] much more restrictive with respect to bridging” with unfractionated heparin or low-molecular-weight heparin, she said. “We recommend against bridging in patients with low to moderate thrombotic risk,” and bridging should only be considered in patients with mechanical prosthetic heart valves or with very high thrombotic risk.
More preoperative recommendations
In the guideline overview session at the congress, Dr. Halverson highlighted some of the new recommendations for preoperative risk assessment.
If time allows, it is recommended to optimize guideline-recommended treatment of CVD and control of CV risk factors including blood pressure, dyslipidemia, and diabetes, before noncardiac surgery (class I).
Patients commonly have “murmurs, chest pain, dyspnea, and edema that may suggest severe CVD, but may also be caused by noncardiac disease,” she noted. The guidelines state that “for patients with a newly detected murmur and symptoms or signs of CVD, transthoracic echocardiography is recommended before noncardiac surgery (class I).
“Many studies have been performed to try to find out if initiation of specific drugs before surgery could reduce the risk of complications,” Dr. Halvorsen noted. However, few have shown any benefit and “the question of presurgery initiation of beta-blockers has been greatly debated,” she said. “We have again reviewed the literature and concluded ‘Routine initiation of beta-blockers perioperatively is not recommended (class IIIA).’ “
“We adhere to the guidelines on acute and chronic coronary syndrome recommending 6-12 months of dual antiplatelet treatment as a standard before elective surgery,” she said. “However, in case of time-sensitive surgery, the duration of that treatment can be shortened down to a minimum of 1 month after elective PCI and a minimum of 3 months after PCI and ACS.”
Patients with specific types of CVD
Dr. Mehilli, a professor at Landshut-Achdorf (Germany) Hospital, highlighted some new guideline recommendations for patients who have specific types of cardiovascular disease.
Coronary artery disease (CAD). “For chronic coronary syndrome, a cardiac workup is recommended only for patients undergoing intermediate risk or high-risk noncardiac surgery.”
“Stress imaging should be considered before any high risk, noncardiac surgery in asymptomatic patients with poor functional capacity and prior PCI or coronary artery bypass graft (new recommendation, class IIa).”
Mitral valve regurgitation. For patients undergoing scheduled noncardiac surgery, who remain symptomatic despite guideline-directed medical treatment for mitral valve regurgitation (including resynchronization and myocardial revascularization), consider a valve intervention – either transcatheter or surgical – before noncardiac surgery in eligible patients with acceptable procedural risk (new recommendation).
Cardiac implantable electronic devices (CIED). For high-risk patients with CIEDs undergoing noncardiac surgery with high probability of electromagnetic interference, a CIED checkup and necessary reprogramming immediately before the procedure should be considered (new recommendation).
Arrhythmias. “I want only to stress,” Dr. Mehilli said, “in patients with atrial fibrillation with acute or worsening hemodynamic instability undergoing noncardiac surgery, an emergency electrical cardioversion is recommended (class I).”
Peripheral artery disease (PAD) and abdominal aortic aneurysm. For these patients “we do not recommend a routine referral for a cardiac workup. But we recommend it for patients with poor functional capacity or with significant risk factors or symptoms (new recommendations).”
Chronic arterial hypertension. “We have modified the recommendation, recommending avoidance of large perioperative fluctuations in blood pressure, and we do not recommend deferring noncardiac surgery in patients with stage 1 or 2 hypertension,” she said.
Postoperative cardiovascular complications
The most frequent postoperative cardiovascular complication is PMI, Dr. Mehilli noted.
“In the BASEL-PMI registry, the incidence of this complication around intermediate or high-risk noncardiac surgery was up to 15% among patients older than 65 years or with a history of CAD or PAD, which makes this kind of complication really important to prevent, to assess, and to know how to treat.”
“It is recommended to have a high awareness for perioperative cardiovascular complications, combined with surveillance for PMI in patients undergoing intermediate- or high-risk noncardiac surgery” based on serial measurements of high-sensitivity cardiac troponin.
The guidelines define PMI as “an increase in the delta of high-sensitivity troponin more than the upper level of normal,” Dr. Mehilli said. “It’s different from the one used in a rule-in algorithm for non-STEMI acute coronary syndrome.”
Postoperative atrial fibrillation (AFib) is observed in 2%-30% of noncardiac surgery patients in different registries, particularly in patients undergoing intermediate or high-risk noncardiac surgery, she noted.
“We propose an algorithm on how to prevent and treat this complication. I want to highlight that in patients with hemodynamic unstable postoperative AF[ib], an emergency cardioversion is indicated. For the others, a rate control with the target heart rate of less than 110 beats per minute is indicated.”
In patients with postoperative AFib, long-term oral anticoagulation therapy should be considered in all patients at risk for stroke, considering the anticipated net clinical benefit of oral anticoagulation therapy as well as informed patient preference (new recommendations).
Routine use of beta-blockers to prevent postoperative AFib in patients undergoing noncardiac surgery is not recommended.
The document also covers the management of patients with kidney disease, diabetes, cancer, obesity, and COVID-19. In general, elective noncardiac surgery should be postponed after a patient has COVID-19, until he or she recovers completely, and coexisting conditions are optimized.
The guidelines are available from the ESC website in several formats: pocket guidelines, pocket guidelines smartphone app, guidelines slide set, essential messages, and the European Heart Journal article.
Noncardiac surgery risk categories
The guideline includes a table that classifies noncardiac surgeries into three groups, based on the associated 30-day risk of death, MI, or stroke:
- Low (< 1%): breast, dental, eye, thyroid, and minor gynecologic, orthopedic, and urologic surgery.
- Intermediate (1%-5%): carotid surgery, endovascular aortic aneurysm repair, gallbladder surgery, head or neck surgery, hernia repair, peripheral arterial angioplasty, renal transplant, major gynecologic, orthopedic, or neurologic (hip or spine) surgery, or urologic surgery
- High (> 5%): aortic and major vascular surgery (including aortic aneurysm), bladder removal (usually as a result of cancer), limb amputation, lung or liver transplant, pancreatic surgery, or perforated bowel repair.
The guidelines were endorsed by the European Society of Anaesthesiology and Intensive Care. The guideline authors reported numerous disclosures.
A version of this article first appeared on Medscape.com.
The European Society of Cardiology guidelines on cardiovascular assessment and management of patients undergoing noncardiac surgery have seen extensive revision since the 2014 version.
They still have the same aim – to prevent surgery-related bleeding complications, perioperative myocardial infarction/injury (PMI), stent thrombosis, acute heart failure, arrhythmias, pulmonary embolism, ischemic stroke, and cardiovascular (CV) death.
Cochairpersons Sigrun Halvorsen, MD, PhD, and Julinda Mehilli, MD, presented highlights from the guidelines at the annual congress of the European Society of Cardiology and the document was simultaneously published online in the European Heart Journal.
The document classifies noncardiac surgery into three levels of 30-day risk of CV death, MI, or stroke. Low (< 1%) risk includes eye or thyroid surgery; intermediate (1%-5%) risk includes knee or hip replacement or renal transplant; and high (> 5%) risk includes aortic aneurysm, lung transplant, or pancreatic or bladder cancer surgery (see more examples below).
It classifies patients as low risk if they are younger than 65 without CV disease or CV risk factors (smoking, hypertension, diabetes, dyslipidemia, family history); intermediate risk if they are 65 or older or have CV risk factors; and high risk if they have CVD.
In an interview, Dr. Halvorsen, professor in cardiology, University of Oslo, zeroed in on three important revisions:
First, recommendations for preoperative ECG and biomarkers are more specific, he noted.
The guidelines advise that before intermediate- or high-risk noncardiac surgery, in patients who have known CVD, CV risk factors (including age 65 or older), or symptoms suggestive of CVD:
- It is recommended to obtain a preoperative 12-lead ECG (class I).
- It is recommended to measure high-sensitivity cardiac troponin T (hs-cTn T) or high-sensitivity cardiac troponin I (hs-cTn I). It is also recommended to measure these biomarkers at 24 hours and 48 hours post surgery (class I).
- It should be considered to measure B-type natriuretic peptide or N-terminal of the prohormone BNP (NT-proBNP).
However, for low-risk patients undergoing low- and intermediate-risk noncardiac surgery, it is not recommended to routinely obtain preoperative ECG, hs-cTn T/I, or BNP/NT-proBNP concentrations (class III).
Troponins have a stronger class I recommendation, compared with the IIA recommendation for BNP, because they are useful for preoperative risk stratification and for diagnosis of PMI, Dr. Halvorsen explained. “Patients receive painkillers after surgery and may have no pain,” she noted, but they may have PMI, which has a bad prognosis.
Second, the guidelines recommend that “all patients should stop smoking 4 weeks before noncardiac surgery [class I],” she noted. Clinicians should also “measure hemoglobin, and if the patient is anemic, treat the anemia.”
Third, the sections on antithrombotic treatment have been significantly revised. “Bridging – stopping an oral antithrombotic drug and switching to a subcutaneous or IV drug – has been common,” Dr. Halvorsen said, “but recently we have new evidence that in most cases that increases the risk of bleeding.”
“We are [now] much more restrictive with respect to bridging” with unfractionated heparin or low-molecular-weight heparin, she said. “We recommend against bridging in patients with low to moderate thrombotic risk,” and bridging should only be considered in patients with mechanical prosthetic heart valves or with very high thrombotic risk.
More preoperative recommendations
In the guideline overview session at the congress, Dr. Halverson highlighted some of the new recommendations for preoperative risk assessment.
If time allows, it is recommended to optimize guideline-recommended treatment of CVD and control of CV risk factors including blood pressure, dyslipidemia, and diabetes, before noncardiac surgery (class I).
Patients commonly have “murmurs, chest pain, dyspnea, and edema that may suggest severe CVD, but may also be caused by noncardiac disease,” she noted. The guidelines state that “for patients with a newly detected murmur and symptoms or signs of CVD, transthoracic echocardiography is recommended before noncardiac surgery (class I).
“Many studies have been performed to try to find out if initiation of specific drugs before surgery could reduce the risk of complications,” Dr. Halvorsen noted. However, few have shown any benefit and “the question of presurgery initiation of beta-blockers has been greatly debated,” she said. “We have again reviewed the literature and concluded ‘Routine initiation of beta-blockers perioperatively is not recommended (class IIIA).’ “
“We adhere to the guidelines on acute and chronic coronary syndrome recommending 6-12 months of dual antiplatelet treatment as a standard before elective surgery,” she said. “However, in case of time-sensitive surgery, the duration of that treatment can be shortened down to a minimum of 1 month after elective PCI and a minimum of 3 months after PCI and ACS.”
Patients with specific types of CVD
Dr. Mehilli, a professor at Landshut-Achdorf (Germany) Hospital, highlighted some new guideline recommendations for patients who have specific types of cardiovascular disease.
Coronary artery disease (CAD). “For chronic coronary syndrome, a cardiac workup is recommended only for patients undergoing intermediate risk or high-risk noncardiac surgery.”
“Stress imaging should be considered before any high risk, noncardiac surgery in asymptomatic patients with poor functional capacity and prior PCI or coronary artery bypass graft (new recommendation, class IIa).”
Mitral valve regurgitation. For patients undergoing scheduled noncardiac surgery, who remain symptomatic despite guideline-directed medical treatment for mitral valve regurgitation (including resynchronization and myocardial revascularization), consider a valve intervention – either transcatheter or surgical – before noncardiac surgery in eligible patients with acceptable procedural risk (new recommendation).
Cardiac implantable electronic devices (CIED). For high-risk patients with CIEDs undergoing noncardiac surgery with high probability of electromagnetic interference, a CIED checkup and necessary reprogramming immediately before the procedure should be considered (new recommendation).
Arrhythmias. “I want only to stress,” Dr. Mehilli said, “in patients with atrial fibrillation with acute or worsening hemodynamic instability undergoing noncardiac surgery, an emergency electrical cardioversion is recommended (class I).”
Peripheral artery disease (PAD) and abdominal aortic aneurysm. For these patients “we do not recommend a routine referral for a cardiac workup. But we recommend it for patients with poor functional capacity or with significant risk factors or symptoms (new recommendations).”
Chronic arterial hypertension. “We have modified the recommendation, recommending avoidance of large perioperative fluctuations in blood pressure, and we do not recommend deferring noncardiac surgery in patients with stage 1 or 2 hypertension,” she said.
Postoperative cardiovascular complications
The most frequent postoperative cardiovascular complication is PMI, Dr. Mehilli noted.
“In the BASEL-PMI registry, the incidence of this complication around intermediate or high-risk noncardiac surgery was up to 15% among patients older than 65 years or with a history of CAD or PAD, which makes this kind of complication really important to prevent, to assess, and to know how to treat.”
“It is recommended to have a high awareness for perioperative cardiovascular complications, combined with surveillance for PMI in patients undergoing intermediate- or high-risk noncardiac surgery” based on serial measurements of high-sensitivity cardiac troponin.
The guidelines define PMI as “an increase in the delta of high-sensitivity troponin more than the upper level of normal,” Dr. Mehilli said. “It’s different from the one used in a rule-in algorithm for non-STEMI acute coronary syndrome.”
Postoperative atrial fibrillation (AFib) is observed in 2%-30% of noncardiac surgery patients in different registries, particularly in patients undergoing intermediate or high-risk noncardiac surgery, she noted.
“We propose an algorithm on how to prevent and treat this complication. I want to highlight that in patients with hemodynamic unstable postoperative AF[ib], an emergency cardioversion is indicated. For the others, a rate control with the target heart rate of less than 110 beats per minute is indicated.”
In patients with postoperative AFib, long-term oral anticoagulation therapy should be considered in all patients at risk for stroke, considering the anticipated net clinical benefit of oral anticoagulation therapy as well as informed patient preference (new recommendations).
Routine use of beta-blockers to prevent postoperative AFib in patients undergoing noncardiac surgery is not recommended.
The document also covers the management of patients with kidney disease, diabetes, cancer, obesity, and COVID-19. In general, after a patient has had COVID-19, elective noncardiac surgery should be postponed until he or she has recovered completely and coexisting conditions are optimized.
The guidelines are available from the ESC website in several formats: pocket guidelines, pocket guidelines smartphone app, guidelines slide set, essential messages, and the European Heart Journal article.
Noncardiac surgery risk categories
The guideline includes a table that classifies noncardiac surgeries into three groups, based on the associated 30-day risk of death, MI, or stroke:
- Low (< 1%): breast, dental, eye, thyroid, and minor gynecologic, orthopedic, and urologic surgery.
- Intermediate (1%-5%): carotid surgery, endovascular aortic aneurysm repair, gallbladder surgery, head or neck surgery, hernia repair, peripheral arterial angioplasty, renal transplant, major gynecologic, orthopedic, or neurologic (hip or spine) surgery, or urologic surgery.
- High (> 5%): aortic and major vascular surgery (including aortic aneurysm), bladder removal (usually as a result of cancer), limb amputation, lung or liver transplant, pancreatic surgery, or perforated bowel repair.
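The three-tier table above amounts to a simple lookup from procedure to 30-day risk category. The sketch below is a condensed, non-exhaustive illustration; the procedure keys are a sample from the list, and the fallback string is my own, since the guideline table does not classify every possible operation.

```python
# Illustrative mapping of example procedures to the ESC guideline's
# 30-day risk tiers (risk of death, MI, or stroke). Non-exhaustive sample.
SURGICAL_RISK = {
    "breast": "low (<1%)",
    "dental": "low (<1%)",
    "thyroid": "low (<1%)",
    "carotid": "intermediate (1%-5%)",
    "hernia repair": "intermediate (1%-5%)",
    "renal transplant": "intermediate (1%-5%)",
    "aortic": "high (>5%)",
    "limb amputation": "high (>5%)",
    "pancreatic": "high (>5%)",
}

def risk_tier(procedure: str) -> str:
    # Procedures not in the table require individual assessment, not a default tier.
    return SURGICAL_RISK.get(procedure.lower(), "not classified here")

print(risk_tier("Hernia repair"))  # intermediate (1%-5%)
```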
The guidelines were endorsed by the European Society of Anaesthesiology and Intensive Care. The guideline authors reported numerous disclosures.
A version of this article first appeared on Medscape.com.
FROM ESC CONGRESS 2022
Vitamins or cocoa: Which preserves cognition?
Unexpected results from a phase 3 trial exploring the effect of multivitamins and cognition have now been published.
Originally presented last November at the 14th Clinical Trials on Alzheimer’s Disease (CTAD) conference, this is the first large-scale, long-term randomized controlled trial to examine the effects of cocoa extract and multivitamins on global cognition. The trial’s primary focus was on cocoa extract, which earlier studies suggest may preserve cognitive function. Analyzing the effect of multivitamins was a secondary outcome.
That vitamins, but not cocoa, proved beneficial is the exact opposite of what researchers expected. Still, the results offer an interesting new direction for future study, lead investigator Laura D. Baker, PhD, professor of gerontology and geriatric medicine at Wake Forest University, Winston-Salem, N.C., said in an interview.
“This study made us take notice of a pathway for possible cognitive protection,” Dr. Baker said. “Without this study, we would never have looked down that road.”
The full results were published online in Alzheimer’s and Dementia.
Unexpected effect
The COSMOS-Mind study is a substudy of a larger parent trial, COSMOS, which investigated the effects of cocoa extract and a standard multivitamin-mineral on cardiovascular and cancer outcomes in more than 21,000 older participants.
In COSMOS-Mind, researchers tested whether daily intake of cocoa extract vs. placebo and a multivitamin-mineral vs. placebo improved cognition in older adults.
More than 2,200 participants aged 65 and older were enrolled and followed for 3 years. They completed tests over the telephone at baseline and annually to evaluate memory and other cognitive abilities.
Results showed cocoa extract had no effect on global cognition compared with placebo (mean z-score, 0.03; P = .28). Daily multivitamin use, however, did show significant benefits on global cognition vs. placebo (mean z-score, 0.07; P = .007).
The beneficial effect was most pronounced in participants with a history of cardiovascular disease (no history 0.06 vs. history 0.14; P = .01).
Researchers found similar protective effects for memory and executive function.
Dr. Baker suggested that one possible explanation for the positive effects of multivitamins may be the boost in micronutrients and essential minerals they provide.
“With nutrient-deficient diets plus a high prevalence of cardiovascular disease, diabetes, and other medical comorbidities that we know impact the bioavailability of these nutrients, we are possibly dealing with older adults who are at below optimum in terms of their essential micronutrients and minerals,” she said.
“Even suboptimum levels of micronutrients and essential minerals can have significant consequences for brain health,” she added.
More research needed
Intriguing as the results may be, more work is needed before the findings could affect nutritional guidance, according to Maria C. Carrillo, PhD, chief science officer for the Alzheimer’s Association.
“While the Alzheimer’s Association is encouraged by these results, we are not ready to recommend widespread use of a multivitamin supplement to reduce risk of cognitive decline in older adults,” Dr. Carrillo said in a statement.
“For now, and until there is more data, people should talk with their health care providers about the benefits and risks of all dietary supplements, including multivitamins,” she added.
Dr. Baker agreed, noting that the study was not designed to measure multivitamin use as a primary outcome. In addition, nearly 90% of the participants were non-Hispanic White, which is not representative of the overall population demographics.
The investigators are now designing another, larger trial that would include a more diverse participant pool. It will be aimed specifically at learning more about how and why multivitamins seem to offer a protective effect on cognition, Dr. Baker noted.
The study was funded by the National Institute on Aging of the National Institutes of Health. Dr. Baker and Dr. Carrillo report no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM ALZHEIMER’S AND DEMENTIA
TBI is an unrecognized risk factor for cardiovascular disease
Traumatic brain injury (TBI) is an unrecognized risk factor for cardiovascular disease (CVD), and more severe TBI is associated with higher risk of CVD, new research shows.
Given the relatively young age of post-9/11–era veterans with TBI, there may be an increased burden of heart disease in the future as these veterans age and develop traditional risk factors for CVD, the investigators, led by Ian J. Stewart, MD, with Uniformed Services University, Bethesda, Md., wrote.
The study was published online in JAMA Neurology.
Novel data
Since Sept. 11, 2001, 4.5 million people have served in the U.S. military, with their time in service defined by the long-running wars in Iraq and Afghanistan. Estimates suggest that up to 20% of post-9/11 veterans sustained a TBI.
While some evidence suggests that TBI increases the risk of CVD, prior reports have focused mainly on cerebrovascular outcomes. Until now, the potential association of TBI with CVD has not been comprehensively examined in post-9/11–era veterans.
The retrospective cohort study included 1,559,928 predominantly male post-9/11 veterans, including 301,169 (19.3%) with a history of TBI and 1,258,759 (80.7%) with no TBI history.
In fully adjusted models, compared with veterans with no TBI history, a history of mild, moderate/severe, or penetrating TBI was associated with increased risk of developing the composite CVD endpoint (coronary artery disease, stroke, peripheral artery disease, and CVD death).
TBIs of all severities were associated with the individual components of the composite outcome, except penetrating TBI and CVD death.
“The association of TBI with subsequent CVD was not attenuated in multivariable models, suggesting that TBI may be accounting for risk that is independent from the other variables,” Dr. Stewart and colleagues wrote.
They noted that the risk was highest shortly after injury, but TBI remained significantly associated with CVD for years after the initial insult.
Why TBI may raise the risk of subsequent CVD remains unclear.
It’s possible that patients with TBI develop more traditional risk factors for CVD through time than do patients without TBI. A study in mice found that TBI led to increased rates of atherosclerosis, the researchers said.
An additional mechanism may be disruption of autonomic regulation, which has been known to occur after TBI.
Another potential pathway is through mental health diagnoses, such as posttraumatic stress disorder; a large body of work has identified associations between PTSD and CVD, including among post-9/11 veterans.
Further work is needed to determine how this risk can be modified to improve outcomes for post-9/11–era veterans, the researchers write.
Unrecognized CVD risk factor?
Reached for comment, Shaheen E. Lakhan, MD, PhD, a neurologist and researcher from Boston who wasn’t involved in the study, said the effects of TBI on heart health are “very underreported, and most clinicians would not make the link.”
“When the brain suffers a traumatic injury, it activates a cascade of neuro-inflammation that goes haywire in an attempt to protect further brain damage. Oftentimes, these inflammatory by-products leak into the body, especially in trauma, when the barriers are broken between brain and body, and can cause systemic body inflammation, which is well associated with heart disease,” Dr. Lakhan said.
In addition, Dr. Lakhan said, “TBI itself localized to just the brain can negatively affect good health habits, leading to worsening heart health, too.”
“Research like this brings light where not much exists and underscores the importance of protecting our brains from physical trauma,” he said.
The study was supported by the assistant secretary of defense for health affairs, endorsed by the Department of Defense through the Psychological Health/Traumatic Brain Injury Research Program Long-Term Impact of Military-Relevant Brain Injury Consortium, and by the U.S. Department of Veterans Affairs. Dr. Stewart and Dr. Lakhan have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Ages and Stages Questionnaire a first step to find developmental delays
The commonly used but sometimes debated Ages and Stages Questionnaire (ASQ) has modest utility for identifying developmental delays in young children, an Australian review and meta-analysis found.
On this easily administered parent-completed screening tool, scores of more than 2 standard deviations below the mean in more than one of five domains had moderate sensitivity and specificity to predict any delay, severe delay, motor delay, and cognitive delay, according to neonatologist Shripada Rao, PhD, a clinical associate professor in the neonatal intensive care unit at Perth Hospital and the University of Western Australia, also in Perth, and colleagues.
If a child of 12-60 months passes all ASQ domains, there is a moderate probability that child does not have severe developmental delay, the researchers concluded. If a child in that age range fails the motor or cognitive domain, there is a moderate probability that some motor or cognitive delay is present. The authors say the tool may work best as a screening test to identify children in need of more formal assessment.
“Our meta-analysis found that ASQ was somewhat more predictive in older children (older than 24 months), compared with younger age groups of 12-24 months,” Dr. Rao said in an interview. “However, the sample size for these comparisons was too small to reach definite conclusions, and we have called for future studies to evaluate ASQ separately for different age groups.”
“Early identification of developmental delay in children is essential to enable timely intervention,” Dr. Rao and associates wrote in JAMA Pediatrics.
While formal assessments such as the Bayley Scales of Infant and Toddler Development are the gold standard, they are time-consuming and expensive, need the physical attendance of both the child and caregivers, and “thus may not be feasible in resource-limited settings or in pandemic conditions.”
According to Barbara J. Howard, MD, commenting on a recent update to the Centers for Disease Control and Prevention’s developmental milestones guide, Learn the Signs. Act Early, fewer than 25% of children with delays or disabilities receive intervention before age 3, and most with emotional, behavioral, and developmental conditions other than autism spectrum disorder receive no intervention before age 5.
The ASQ
As an accessible alternative, the ASQ consists of questions on communication (language), gross-motor, fine-motor, problem-solving (cognitive), and personal-adaptive skills. The survey requires only 10-15 minutes, is relatively inexpensive, and also establishes a sense of parental involvement, the authors noted.
“Based on the generally accepted interpretation of LR [likelihood ratio] values, if a child passes ASQ-2SD, there is a moderate probability that the child does not have severe delay,” the investigators concluded.
The analysis
The final meta-analysis reviewed 36 eligible ASQ studies published from 1997 to 2022. Looking at the four indicators of pooled sensitivity, specificity, and positive and negative likelihood ratios, the following respective predictive values emerged for scores of more than 2 SDs below the mean across several domains: sensitivity of 0.77 (95% confidence interval, 0.64-0.86), specificity of 0.81 (95% CI, 0.75-0.86), positive likelihood ratio of 4.10 (95% CI, 3.17-5.30), and negative likelihood ratio of 0.28 (95% CI, 0.18-0.44).
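The likelihood ratios reported above follow directly from the pooled sensitivity and specificity via the standard formulas (LR+ = sens / (1 − spec); LR− = (1 − sens) / spec). The sketch below shows that derivation and how an LR updates a pretest probability; the 10% pretest probability is a made-up illustration, not a figure from the meta-analysis, and small differences from the published 4.10 reflect rounding of the pooled inputs.

```python
def likelihood_ratios(sensitivity: float, specificity: float) -> tuple[float, float]:
    """Standard definitions: LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

def posttest_probability(pretest: float, lr: float) -> float:
    """Convert pretest probability to odds, apply the likelihood ratio, convert back."""
    odds = pretest / (1.0 - pretest)
    post_odds = odds * lr
    return post_odds / (1.0 + post_odds)

# From the pooled estimates: sensitivity 0.77, specificity 0.81.
lr_pos, lr_neg = likelihood_ratios(0.77, 0.81)
print(round(lr_pos, 2), round(lr_neg, 2))  # 4.05 0.28

# Hypothetical 10% pretest probability of delay; a failed screen raises it to ~31%.
print(round(posttest_probability(0.10, lr_pos), 2))  # 0.31
```

A "moderate" LR− of 0.28 is what underlies the authors' conclusion: passing the ASQ lowers, but does not eliminate, the probability of severe delay.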
They cautioned, however, that the certainty of evidence from the reviewed studies was low or very low and given the small sample sizes for comparing domains, clinicians should be circumspect in interpreting the results.
An initial step
Commenting on the paper but not involved in it, David G. Fagan, MD, vice chairman of pediatric ambulatory administration in the department of pediatrics at Cohen Children’s Medical Center, New York, agreed that screening tools such as the ASQ have a place in clinical practice. “However, the purpose of a screening tool is not to make the diagnosis but to identify children at risk for developmental delays,” he said in an interview. “The meta-analysis highlights the fact that no screening is 100% accurate and that results need to be interpreted in context.
“Before screening tools were widely used, pediatricians trusted their gut,” Dr. Fagan continued. “‘I know it when I see it,’ which obviously resulted in tremendous variability based on experience.”
He added that, even if a child passes this validated questionnaire, any concern on the part of a parent or pediatrician about developmental delay should be addressed with further assessment.
The future
According to Dr. Rao, clinicians should continue to screen for developmental delays in young children using the ASQ. “Given the long wait times to see a developmental pediatrician or a clinical psychologist, a screening tool such as ASQ will enable appropriate triaging.”
Going forward, however, studies should evaluate this questionnaire separately for different age groups such as less than 12 months, 12-23 months, and at least 24 months. They should also be prospective in design and entail a low risk of bias, as well as report raw numbers for true and false positives and negatives. “Even if they use their own cutoff ASQ scores, they should also give results for the conventional cutoff scores to enable comparison with other studies,” the authors wrote.
The authors disclosed no specific funding for this study and no competing interests. Dr. Fagan disclosed no competing interests with regard to his comments.
FROM JAMA PEDIATRICS
Ketamine promising for rare condition linked to autism
Also known as Helsmoortel–Van Der Aa syndrome, ADNP syndrome is caused by mutations in the ADNP gene. Studies in animal models suggest that low-dose ketamine increases expression of ADNP and is neuroprotective.
Intrigued by the preclinical evidence, Alexander Kolevzon, MD, clinical director of the Seaver Autism Center at Mount Sinai, New York, and colleagues treated 10 children with ADNP syndrome with a single low dose of ketamine (0.5 mg/kg) infused intravenously over 40 minutes. The children were aged 6-12 years.
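Weight-based dosing like this is straightforward arithmetic; as an illustrative sketch (a hypothetical helper, not part of the study protocol):

```python
def infusion_plan(weight_kg, dose_mg_per_kg=0.5, duration_min=40):
    """Total dose and constant rate for a weight-based IV infusion."""
    total_mg = dose_mg_per_kg * weight_kg
    rate_mg_per_min = total_mg / duration_min
    return total_mg, rate_mg_per_min

# Hypothetical 30-kg child at the study's 0.5 mg/kg over 40 minutes
total, rate = infusion_plan(30.0)
```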
On parent-report instruments used to assess treatment effects, ketamine was associated with “nominally significant” improvement in a variety of domains, including social behavior, attention-deficit and hyperactivity, restricted and repetitive behaviors, and sensory sensitivities.
Parent reports of improvement in these domains aligned with clinician-rated assessments based on the Clinical Global Impressions–Improvement scale.
The results also highlight the potential utility of electrophysiological measurement of auditory steady-state response and eye-tracking to track change with ketamine treatment, the researchers say.
The study was published online in Human Genetics and Genomics Advances (HGG Advances).
Hypothesis-generating
Ketamine was generally well tolerated. There were no clinically significant abnormalities in laboratory or cardiac monitoring, and there were no serious adverse events (AEs).
Treatment-emergent AEs were all mild to moderate, and no child required intervention.
The most common AEs were elation/silliness in five children (50%), all of whom had a history of similar symptoms. Drowsiness and fatigue occurred in four children (40%) and two of them had a history of drowsiness. Aggression was likewise relatively common, reported in four children (40%), all of whom had aggression at baseline.
Decreased appetite emerged as a new AE in three children (30%), increased anxiety occurred in three children (30%), and irritability, nausea/vomiting, and restlessness each occurred in two children (20%).
The researchers caution that the findings are intended to be “hypothesis generating.”
“We are encouraged by these findings, which provide preliminary support for ketamine to help reduce negative effects of this devastating syndrome,” Dr. Kolevzon said in a news release from Mount Sinai.
Ketamine might help ease symptoms of ADNP syndrome “by increasing expression of the ADNP gene or by promoting synaptic plasticity through glutamatergic pathways,” Dr. Kolevzon told this news organization.
The next step, he said, is to get “a larger, placebo-controlled study approved for funding using repeated dosing over a longer duration of time. We are working with the FDA to get the design approved for an investigational new drug application.”
Support for the study was provided by the ADNP Kids Foundation and the Foundation for Mood Disorders. Support for mediKanren was provided by the National Center for Advancing Translational Sciences, and National Institutes of Health through the Biomedical Data Translator Program. Dr. Kolevzon is on the scientific advisory board of Ovid Therapeutics, Ritrova Therapeutics, and Jaguar Therapeutics and consults to Acadia, Alkermes, GW Pharmaceuticals, Neuren Pharmaceuticals, Clinilabs Drug Development Corporation, and Scioto Biosciences.
A version of this article first appeared on Medscape.com.
FROM HUMAN GENETICS AND GENOMICS ADVANCES
Parent training pays off for children with autism
“Referrals for parent training should now be considered the expected standard for medical practice,” said a member of the research team, Timothy B. Smith, PhD, a professor of psychology at Brigham Young University, Provo, Utah.
Programs that show parents how to teach functional skills and address maladaptive behaviors, also known as parent-mediated or parent-implemented interventions, offer an alternative to one-on-one professional services, which are in short supply, according to the paper, which was published in the Journal of Autism and Developmental Disorders.
Methods and results
The meta-analysis included 54 papers based on randomized clinical trials involving 2,895 children, which compared the effects of various parent interventions with professional treatment, treatment as usual, or being on a wait-list to receive an intervention.
Overall, the research team reported “moderately strong” average benefits from the parent-mediated interventions (Hedges’ g, 0.553), indicating a medium effect size. Parent interventions had the greatest effect on outcomes involving positive behavior and social skills (0.603), followed by language and communication (0.545), maladaptive behavior (0.519), and life skills (0.239).
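Hedges’ g is Cohen’s d with a small-sample correction, which is what makes it the usual choice for pooling trials of varying size. A minimal sketch of the standard formula (the inputs below are illustrative, not the review’s data):

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Hedges' g: Cohen's d scaled by the small-sample correction J."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp  # Cohen's d
    # Approximate correction factor for small-sample bias
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * j

# Illustrative groups: intervention vs. control, 30 children each
g = hedges_g(10.0, 2.0, 30, 9.0, 2.0, 30)
```

With 30 children per arm the correction shrinks Cohen’s d of 0.5 only slightly; in very small trials the shrinkage is more pronounced, which is the point of using g.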
Similar benefits were observed regardless of a child’s age or sex or which parent or parents implemented an intervention. The effects also appeared to be consistent regardless of intervention characteristics, such as the number of training sessions parents received, although the researchers noted that many studies did not provide data on such details.
Paul Carbone, MD, a professor of pediatrics at the University of Utah, Salt Lake City, who was not involved in the review, said it demonstrates that such parental engagement is “vitally important” and pediatricians “should not hesitate to refer interested families.”
Dr. Carbone, who is the medical director of an assessment program for children with suspected developmental disabilities, said many training programs for parents have adopted telehealth, adding to their convenience. To make appropriate referrals, primary care clinicians should become acquainted with local programs and learn which outcomes they target, he said.
Dr. Smith noted that primary care physicians are “better trained now than ever” to identify autism spectrum disorder and therefore are among the first to identify those conditions and help parents understand “that their actions at home absolutely make a difference in the child’s development.”
Overcoming limitations, future research needs
The research team attempted to overcome limitations of previous reviews by using comprehensive search terms and other methods to identify relevant studies, including some that had not been published. They included only studies reflecting the common practice of training multiple parents simultaneously, they wrote.
Dr. Smith noted that long-term outcomes data and further study to compare effects on children with mild, moderate, and severe autism are needed.
Although logic would suggest greater benefits for children with severe disease, there are no data to demonstrate that, he said.
The authors of the study and Dr. Carbone reported no relevant competing interests.
FROM JOURNAL OF AUTISM AND DEVELOPMENTAL DISORDERS
Largest-ever study into the effects of cannabis on the brain
The largest-ever independent study into the effects of cannabis on the brain is being carried out in the United Kingdom.
Even though cannabis is the most commonly used illegal drug in the United Kingdom, and medicinal cannabis has been legal there since 2018, little is known about why some people react badly to it and others seem to benefit from it.
According to Home Office figures on drug use from 2019, 7.6% of adults aged 16-59 used cannabis in the previous year.
Medicinal cannabis in the United Kingdom can only be prescribed if no other licensed medicine could help the patient. At the moment, GPs can’t prescribe it; only specialist hospital doctors can. The National Health Service says it can only be used in three circumstances: in rare, severe epilepsy; to deal with chemotherapy side effects such as nausea; or to help with multiple sclerosis.
As part of the Cannabis&Me study, King’s College London (KCL) needs to recruit 3,000 current cannabis users and 3,000 non–cannabis users to take part in an online survey, with a third of those survey respondents then taking part in a face-to-face assessment that includes virtual reality (VR) and psychological analysis. The study also aims to determine how the DNA of cannabis users and their endocannabinoid system affect their experiences, both negative and positive, with the drug.
The study is spearheaded by Marta Di Forti, MD, PhD, and has been allocated over £2.5 million in funding by the Medical Research Council.
This news organization asked Dr. Di Forti about the study.
Question: How do you describe the study?
Answer: “It’s a really unique study. We are aiming to see what’s happening to people using cannabis in the privacy of their homes for medicinal, recreational reasons, or whatever other reason.
“The debate on cannabis has always been quite polarized. There have been people who experience adversities with cannabis use, especially psychosis, whose families may perhaps like cannabis to be abolished if possible. Then there are other people who are saying they get positive benefits from using cannabis.”
Q: So where does the study come in?
A: “The study wants to bring the two sides of the argument together and understand what’s really happening. The group I see as a clinician comes to severe harm when they use cannabis regularly. We want to find out who they are and whether we can identify them. While we need to make sure they never come to harm when using cannabis, we need to consider others who won’t come to harm from using cannabis and give them a chance to use it in a way that’s beneficial.”
Q: How does the study work?
A: “The first step of the study is an online questionnaire that can be filled in by anyone aged 18-45 who lives in the London area or can travel here if selected. The first set of questions gives a general idea of their cannabis use: ‘Why do they use it?’ ‘What are its benefits?’ Then come general questions on what their life has been like up to that point: ‘Did they have any adversities in childhood?’ ‘How are their mood and anxiety levels?’ ‘Do they experience any paranoid responses in everyday life?’ It probably takes between 30 and 40 minutes to fill out the questionnaire.”
Q: Can you explain about paranoid responses?
A: “We go through the questionnaires looking at people’s paranoid response to everyday life, not in clinical disorder terms, just in terms of the differences in how we respond to certain circumstances. For example: ‘How do you feel if someone’s staring at you on the Tube?’ Some people are afraid, some feel uncomfortable, some people don’t notice, and others think a person is staring at them because they look good, or some other such positive feeling. So, we give people a paranoia score and will invite some at the top and some at the bottom of that score for a face-to-face assessment. We want to select people who are using cannabis daily and experiencing either no paranoia or high paranoia.”
Q: What happens at the face-to-face assessments?
A: “We do two things which are very novel. We ask them to take part in a virtual reality experience. They are in a lovely shop and within this experience they come across challenges, which may or may not induce a benign paranoia response. We will ask them to donate a sample of blood before they go into the VR set. We will test for tetrahydrocannabinol (THC) and cannabidiol (CBD). We will also look at the metabolites of the two. People don’t take into account how differently individuals metabolize cannabis, which could be one of the reasons why some people can tolerate it and others can’t.”
Q: There’s also a genetic aspect of the study?
A: “From the same sample, we will extract DNA to look at the genetics across the genome and compare genetic variations between high and low paranoia in the context of cannabis use. We will also look at epigenetics; we have learned from neuroscience, and also from cancer research, that sometimes a substance we ingest has an effect on our health. It’s perhaps an interaction with the way our DNA is written, but also with changes to the way our DNA is read and translated into biology when exposed to that substance. We know that smoking tobacco does have an impact at an epigenetic level on the DNA. We also know that in people who stop smoking, these epigenetic impacts are partially reversed. This work hasn’t been done properly for cannabis.
“There have been four published studies that have looked at the effect of cannabis use on epigenetics, but they have been quite inconclusive, and they haven’t looked at large numbers of current users while taking into account how much they are using. Moreover, we do know that when THC and CBD get into our bodies, they interact with something that is already embedded in our biology: the endocannabinoid system. Therefore, in the blood samples we also aim to measure the levels of the endocannabinoids we naturally produce.
“All of this data will then be analyzed to see if we can get close to understanding what makes some cannabis users susceptible to paranoia while others who are using cannabis get some benefits, even in the domain of mental health.”
Q: Who are you looking for to take part in your study?
A: “What we don’t want is to get only people who are the classic friends and family of academics to do the study. We want a representative sample of people out there who are using cannabis. My ideal candidate would be someone who hates me and usually sends me abusive emails saying I’m against cannabis, which is wrong. All I want to find out is who is susceptible to harm which will keep everybody else safe. We are not trying to demonize cannabis; it’s exactly the opposite. We would like people from all ethnic and socioeconomic backgrounds to join to give voice to everyone out there using cannabis, the reasons why, and the effects they experience.”
Q: Will this study perhaps give more information about when it’s appropriate to prescribe medicinal cannabis? It’s still quite unusual for it to be prescribed in the United Kingdom, isn’t it?
A: “Absolutely spot on. That’s exactly the point. We want to hear from people who are receiving medicinal cannabis as a prescription, as they are likely to take it on a daily basis, and daily use is what epidemiological studies have linked to the highest risk of psychosis. There will be people taking THC every day for pain, nausea, Crohn’s disease, and more.
“Normally, when you receive a prescription for a medication, the physician in charge will tell you the potential side effects, which will be monitored to make sure it’s safe, and you may have to swap to a different medication. This isn’t really happening with medicinal cannabis, which is one of the reasons clinicians are anxious about prescribing it, and they have been criticized for not prescribing it very much. There’s much less structure and guidance about monitoring ‘psychosis-related’ side effects. If we can really identify those people who are likely to develop psychosis or disabling paranoia when they use cannabis, physicians might be more prepared to prescribe more widely when indicated.
“You could even have a virtual reality scenario available as a screening tool when you get prescribed medicinal cannabis, to see if there are changes in your perception of the world, which is ultimately what psychosis is about. Could this be a way of implementing safe prescribing which will encourage physicians to use safe cannabis compounds and make some people less anxious about it?
“This study is not here to highlight the negativity of cannabis; on the contrary, it’s to understand how it can be used recreationally, and even more important, medicinally, in a safe way, so that people who are coming to no harm can continue to do so and people who are at risk can be kept safe, or at least monitored adequately.”
A version of this article first appeared on Medscape UK.