Five Steps to Improved Colonoscopy Performance


As quality indicators and benchmarks for colonoscopy rise in the coming years, gastroenterologists must think about ways to improve performance across the procedure continuum.

According to several experts who spoke at the American Gastroenterological Association’s Postgraduate Course this spring, offered at Digestive Disease Week (DDW), gastroenterologists can take five steps to improve their performance: addressing poor bowel prep, improving polyp detection, following recommended polyp surveillance intervals, reducing the environmental impact of gastrointestinal (GI) practice, and implementing artificial intelligence (AI) tools for efficiency and quality.
 

Addressing Poor Prep

To improve bowel preparation rates, clinicians may consider identifying patients at high risk for inadequate prep. Known risk factors include age, body mass index, inpatient status, constipation, tobacco use, and hypertension, but other variables tend to be stronger predictors of inadequate prep, including cirrhosis, Parkinson’s disease, dementia, diabetes, gastroparesis, opioid use, tricyclic antidepressant use, and prior colorectal surgery.

Although several prediction models are based on some of these factors — looking at comorbidities, antidepressant use, constipation, and prior abdominal or pelvic surgery — the data don’t indicate whether knowing about or addressing these risks actually leads to better bowel prep, said Brian Jacobson, MD, associate professor of medicine at Harvard Medical School, Boston, and director of program development for gastroenterology at Massachusetts General Hospital in Boston.

Instead, the biggest return-on-investment option is to maximize prep for all patients, he said, especially since every patient has at least some risk of poor prep, whether because of the required diet changes, medication considerations, or the purgative solution and its timing.



To create a state-of-the-art bowel prep process, Dr. Jacobson recommended several tactics for all patients: verbal and written instructions covering every component of prep, patient navigation by phone or virtual messaging to guide patients through the process, a low-fiber or all-liquid diet on the day before colonoscopy, and a split-dose 2-L prep regimen. Patients should begin the second half of the split-dose regimen 4-6 hours before colonoscopy and complete it at least 2 hours before the procedure starts, and clinicians should use an irrigation pump during colonoscopy to improve visibility.
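
For practices that fold these timing rules into a patient navigation or reminder workflow, they can be expressed as a small scheduling helper. The sketch below is illustrative only; the function name and the Python implementation are assumptions rather than part of Dr. Jacobson’s recommendations, and it simply encodes the 4-6 hour start window and the 2-hour completion cutoff for the second half of the split dose.

```python
from datetime import datetime, timedelta

def second_dose_window(procedure_time: datetime) -> dict:
    """Timing window for the second half of a split-dose bowel prep.

    Encodes the guidance described above: begin the second dose 4-6 hours
    before the colonoscopy and finish it at least 2 hours before the start.
    """
    return {
        "earliest_start": procedure_time - timedelta(hours=6),
        "latest_start": procedure_time - timedelta(hours=4),
        "finish_by": procedure_time - timedelta(hours=2),
    }

# Example: a colonoscopy scheduled for 10:00 AM
for label, t in second_dose_window(datetime(2024, 11, 7, 10, 0)).items():
    print(f"{label}: {t:%I:%M %p}")
```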

Beyond that, Dr. Jacobson noted, higher-risk patients can take a split-dose 4-L prep regimen with bisacodyl, follow a low-fiber diet 2-3 days before colonoscopy, and switch to a clear liquid diet the day before colonoscopy. Using simethicone as an adjunct solution can also reduce bubbles in the colon.

Future tech developments may help clinicians as well, he said, such as using AI to identify patients at high risk and modifying their prep process, creating a personalized prep on a digital platform with videos that guide patients through the process, and using a phone checklist tool to indicate when they’re ready for colonoscopy.
 

Improving Polyp Detection

The adenoma detection rate (ADR) can vary widely with differences in technique, technical skill, pattern recognition, interpretation, and experience. New adjunct and AI-based tools can help improve ADR, especially when clinicians are motivated to improve, receive training, and use best-practice techniques.

“In colonoscopy, it’s tricky because it’s not just a blood test or an x-ray. There’s really a lot of technique involved, both cognitive awareness and pattern recognition, as well as our technical skills,” said Tonya Kaltenbach, MD, professor of clinical medicine at the University of California San Francisco and director of advanced endoscopy at the San Francisco VA Health Care System in San Francisco.

For instance, multiple tools and techniques may be needed in real time to interpret a lesion, such as washing, retroflexing, and using better lighting, while paying attention to alerts and noting areas for further inspection and resection.



“This is not innate. It’s a learned skill,” she said. “It’s something we need to intentionally make efforts on and get feedback to improve.”

Improvement starts with using the right mindset for lesion detection, Dr. Kaltenbach said, by having a “reflexive recognition of deconstructed patterns of normal” — following the lines, vessels, and folds and looking for interruptions, abnormal thickness, and mucus caps. On top of that, adjunctive tools such as caps/cuffs and dye chromoendoscopy can help with proper ergonomics, irrigation, and mucosa exposure.

In the past 3 years, real-world studies of AI and computer-assisted detection have shown mixed results, with some demonstrating significant increases in ADR and others showing no benefit, she said. However, being willing to try AI and other tools, such as the Endocuff cap, may help improve ADR, standardize interpretation, improve efficiency, and increase reproducibility.

“We’re always better with intentional feedback and deliberate practice,” she said. “Remember that if you improve, you’re protecting the patient from death and reducing interval cancer.”
 

Following Polyp Surveillance Intervals

The US Multi-Society Task Force on Colorectal Cancer’s recommendations for follow-up after colonoscopy and polypectomy provide valuable information and rationale for how to determine surveillance intervals for patients. However, clinicians still may be unsure what to recommend for some patients — or tell them to come back too soon, leading to unnecessary colonoscopy. 

For instance, a 47-year-old woman who presents for her initial screening and has a single 6-mm polyp, returned by pathology as an adenoma, may be considered at average risk and advised to return in 7-10 years. The guidelines are similarly clear for patients with one or two adenomas under 10 mm removed en bloc.

However, once the case details shift into gray areas, such as three or four adenomas between 10 and 20 mm or piecemeal removal, clinicians may differ in their recommendations, said Rajesh N. Keswani, MD, associate professor of medicine at the Northwestern University Feinberg School of Medicine and director of endoscopy for Northwestern Medicine in Chicago. At DDW 2024, Dr. Keswani presented several case examples, often finding that audience opinions varied.



In addition, he noted, recent studies have found that clinicians often measure polyps imprecisely, struggle to identify sessile serrated polyposis syndrome, and frequently don’t follow evidence-based guidelines.

“Why do we ignore the guidelines? There’s this perception that a patient has risk factors that aren’t addressed by the guidelines, with regards to family history or a distant history of a large polyp that we don’t want to leave to the usual intervals,” he said. “We feel uncomfortable, even with our meticulous colonoscopy, telling people to come back in 10 years.”

To improve guideline adherence, Dr. Keswani suggested providing additional education, implementing an automated surveillance calculator, and making the guidelines available at the point of care. At Northwestern, for instance, clinicians use a hyperlink to an interpreted version of the guidelines that incorporates prior colonoscopy considerations. Overall, though, practitioners should feel comfortable leaning toward longer surveillance intervals, he noted.
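
As a concrete illustration of what an automated surveillance calculator might look like, here is a minimal sketch. The names and structure are hypothetical, it is not the USMSTF algorithm, and it encodes only the low-risk scenario described above, deferring everything else to the full guideline tables.

```python
from dataclasses import dataclass

@dataclass
class PolypFindings:
    adenoma_count: int
    largest_size_mm: float
    piecemeal_resection: bool

def suggest_surveillance(findings: PolypFindings) -> str:
    """Suggest a follow-up interval for the simple case described above.

    Only the low-risk scenario from the article (1-2 adenomas, each under
    10 mm, removed en bloc -> return in 7-10 years) is encoded; anything
    else is flagged for review against the full guidelines rather than
    guessed at.
    """
    low_risk = (
        findings.adenoma_count <= 2
        and findings.largest_size_mm < 10
        and not findings.piecemeal_resection
    )
    if low_risk:
        return "Average risk: repeat colonoscopy in 7-10 years"
    return "Gray area: review USMSTF guidance / local surveillance calculator"

print(suggest_surveillance(PolypFindings(adenoma_count=1, largest_size_mm=6.0, piecemeal_resection=False)))
```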

“More effort should be spent on getting unscreened patients in for colonoscopy than bringing back low-risk patients too early,” he said.
 

 

 

Reducing Environmental Effects

Recent waste audits of endoscopy rooms show that providers generate 1-3 kg of waste per procedure; based on 18 million procedures in the United States per year, that is enough to fill 117 soccer fields to a depth of 1 m. This waste comes from procedure-related equipment, administration, medications, travel of patients and staff, and infrastructure such as air conditioning. Taking steps toward a green practice can reduce waste and the carbon footprint of healthcare.
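
A quick back-of-the-envelope check shows how those figures fit together. The pitch dimensions and the 2-kg midpoint below are assumptions for illustration; the script simply computes the bulk density of waste that the quoted volume would imply.

```python
# Sanity check of the figures quoted above (assumed values marked as such).
procedures_per_year = 18_000_000
waste_kg_per_procedure = 2.0                 # assumed midpoint of the 1-3 kg range
field_area_m2 = 105 * 68                     # assumed full-size soccer pitch
stated_volume_m3 = 117 * field_area_m2 * 1.0 # 117 fields filled 1 m deep

total_waste_kg = procedures_per_year * waste_kg_per_procedure
implied_bulk_density = total_waste_kg / stated_volume_m3

print(f"Total waste: {total_waste_kg / 1_000_000:.0f} kilotonnes per year")
print(f"Implied bulk density: {implied_bulk_density:.0f} kg per cubic metre")
```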

“When we think about improving colonoscopy performance, the goal is to prevent colon cancer death, but when we expand that, we have to apply sustainable practices as a domain of quality,” said Heiko Pohl, MD, professor of medicine at the Geisel School of Medicine at Dartmouth in Hanover, New Hampshire, and a gastroenterologist at White River Junction VA Medical Center in White River Junction, Vermont.

The GI Multisociety Strategic Plan on Environmental Sustainability suggests a 5-year initiative to improve sustainability and reduce waste across seven domains — clinical setting, education, research, society efforts, intersociety efforts, industry, and advocacy.



For instance, clinicians can take the biggest step toward sustainability by avoiding unneeded colonoscopies, Dr. Pohl said, noting that between 20% and 30% of procedures aren’t appropriate or indicated. Instead, practitioners can implement longer surveillance intervals, adhere to guidelines, and consider alternative tests, such as the fecal immunochemical test, fecal DNA, blood-based tests, and CT colonography, where relevant.

Clinicians can also rethink their approach to resection, such as using a snare first instead of forceps to reduce single-instrument use, using clip closure only when it’s truly indicated, and implementing AI-assisted optical diagnosis to help with leaving rectosigmoid polyps in place.

In terms of physical waste, practices may also reconsider how they sort bins and biohazards, looking at new ways to dispose of regulated medical waste, sharps, recyclables, and typical trash. Waste audits can help find ways to reduce paper, combine procedures, and create more efficient use of endoscopy rooms.

“We are really in a very precarious situation,” Dr. Pohl said. “It’s our generation that has a responsibility to change the course for our children’s and grandchildren’s sake.”
 

AI for Quality and Efficiency

Moving forward, AI tools will likely become more popular in various parts of GI practice, by assisting with documentation, spotting polyps, tracking mucosal surfaces, providing optical histopathology, and supervising performance through high-quality feedback.

“Endoscopy has reached the limits of human visual capacity, where seeing more pixels won’t necessarily improve clinical diagnosis. What’s next for elevating the care of patients really is AI,” said Jason B. Samarasena, MD, professor of medicine and program director of the interventional endoscopy training program at the University of California Irvine in Irvine, California.

As practices adopt AI-based systems, however, clinicians should guard against a false sense of comfort and the “alarm fatigue” that can set in when bounding boxes become distracting. New tools instead need to be adopted as a “physician-AI hybrid,” designed with the endoscopist in mind, particularly where they help deliver a better exam by monitoring withdrawal time or endoscope slippage.



“In real-world practice, this is being implemented without attention to endoscopist inclination and behavior,” he said. “Having a better understanding of physician attitudes could yield more optimal results.”

Notably, AI-assisted tools should be viewed as akin to spell-check, signaling to the endoscopist when to pay attention and double-check an area while still relying on the expert to perform a high-quality exam, said Aasma Shaukat, MD, professor of medicine and director of GI outcomes research at the NYU Grossman School of Medicine, New York City.

“This should be an adjunct or an additional tool, not a replacement tool,” she added. “This doesn’t mean to stop doing astute observation.”



Future tools show promise in terms of tracking additional data related to prep quality, cecal landmarks, polyp size, mucosa exposure, histology prediction, and complete resection. These automated reports could also link to real-time dashboards, hospital or national registries, and reimbursement systems, Dr. Shaukat noted.

“At the end of the day, our interests are aligned,” she said. “Everybody cares about quality, patient satisfaction, and reimbursement, and with that goal in mind, I think some of the tools can be applied to show how we can achieve those principles together.”

Dr. Jacobson, Dr. Kaltenbach, Dr. Keswani, Dr. Pohl, Dr. Samarasena, and Dr. Shaukat reported no relevant financial relationships.

A version of this article appeared on Medscape.com.


High-Dose Vitamin D Linked to Lower Disease Activity in CIS


High-dose oral cholecalciferol (vitamin D3) supplementation significantly reduces evidence of disease activity in patients with clinically isolated syndrome (CIS), results of a randomized, controlled trial suggest. In addition, cholecalciferol had a favorable safety profile and was well tolerated.

“These data support high-dose vitamin D supplementation in early MS and make vitamin D the best candidate for add-on therapy evaluation in the therapeutic strategy for multiple sclerosis [MS],” said study author Eric Thouvenot, MD, PhD, University Hospital of Nimes, Neurology Department, Nimes, France.

The study was presented at the 2024 ECTRIMS annual meeting.
 

Vitamin D Supplementation Versus Placebo

Research shows vitamin D deficiency is a risk factor for MS. However, previous studies of vitamin D supplementation in MS, which used different regimens and durations, have produced contradictory results.

The current double-blind study included 303 adults newly diagnosed with CIS (within 90 days) who had a serum 25-hydroxyvitamin D concentration of less than 100 nmol/L at baseline. Participants had a median age of 34 years, and 70% were women.

About one third of participants had optic neuritis, two thirds had oligoclonal bands from cerebrospinal fluid analysis, and the median Expanded Disability Status Scale (EDSS) score was 1.0. Of the total, 89% fulfilled 2017 McDonald criteria for the diagnosis of relapsing-remitting MS (RRMS).

Participants were randomly assigned to receive high-dose (100,000 international units) oral cholecalciferol or placebo every 2 weeks for 24 months. Participants had a clinical visit at 3, 6, 12, 18, and 24 months, and brain and spinal cord MRI with and without gadolinium at 3, 12, and 24 months.

The primary outcome was occurrence of disease activity — relapse, new or enlarging T2 lesions, and presence of contrast-enhancing lesions.
 

Significant Difference

During follow-up, 60.3% of patients in the vitamin D group showed evidence of disease activity versus 74.1% in the placebo group (hazard ratio [HR], 0.66; 95% CI, 0.50-0.87; P = .004). In addition, the median time to evidence of disease activity was 432 days in the vitamin D group versus 224 days in the placebo group (P = .003).

“As you can see, the difference is really, really significant,” said Dr. Thouvenot, referring to a Kaplan-Meier curve. He said he was somewhat surprised by the “very rapid” effect of vitamin D.

He noted that the 34% reduction in relative risk for disease activity is “similar to that of some published platform therapies for CIS patients.”
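
That figure follows directly from the trial’s hazard ratio: treating the HR as an approximate relative risk, 1 - 0.66 = 0.34, corresponding to a 34% relative reduction in the rate of disease activity.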

An analysis of the 247 patients who met 2017 McDonald criteria for RRMS at baseline showed the same results.

Secondary analyses showed no significant reduction in relapses and no significant differences for annual change in EDSS, quality of life, fatigue, anxiety, or depression.

Additional analyses showed the HR was unchanged after adjusting for known prognostic factors including age, sex, number of lesions (< 9 vs ≥ 9), EDSS score at baseline, and delay between CIS and treatment onset.

Results showed vitamin D3 supplementation was safe and well tolerated. Dr. Thouvenot noted that 95% of participants completed the trial, and none of the 33 severe adverse events in 30 patients suggested hypercalcemia or were related to the study drug.

These encouraging new data support further studies of high-dose vitamin D supplementation as an add-on therapy in early MS, said Dr. Thouvenot. He noted that animal models suggest vitamin D added to interferon beta has a synergistic effect on the immune system.
 

 

 

‘Fabulous’ Research

During a question-and-answer session, delegates praised the study, with some describing it as “fantastic” or “fabulous.”

Addressing a query about why this study succeeded in showing the benefits of vitamin D while numerous previous studies did not, Dr. Thouvenot said it may be due to the longer duration or a design that was better powered to show differences.

Asked if researchers examined vitamin D blood levels during the study, Dr. Thouvenot said these measures are “ongoing.”

Responding to a question of whether high-dose vitamin D could be a lifelong treatment, he referred again to the “excellent” safety of the intervention. Not only is it well tolerated, but vitamin D benefits bones and the risk for hypercalcemia is low except perhaps for patients with tuberculosis or sarcoidosis, he said.

“When you exclude those patients, the safety is huge, so I don’t know why we should stop it once it’s started.”

This study was funded in part by the French Ministry of Health. Dr. Thouvenot reported no relevant disclosures.

A version of this article first appeared on Medscape.com.


Harnessing Doxycycline for STI Prevention: A Vital Role for Primary Care Physicians


Primary care physicians frequently offer postexposure prophylaxis for various infections, including influenza, pertussis, tetanus, hepatitis, and Lyme disease, among others. However, the scope of postexposure prophylaxis in primary care is expanding, presenting an opportunity to further integrate it into patient care. As primary care providers, we have the unique advantage of being involved in both preventive care and immediate response, particularly in urgent care or triage scenarios. This dual role is crucial, as timely administration of postexposure prophylaxis can prevent infections from taking hold, especially following high-risk exposures.

Recently, the use of doxycycline as postexposure prophylaxis for sexually transmitted infections (STIs) has gained attention. Traditionally, doxycycline has been used as preexposure or postexposure prophylaxis for conditions like malaria and Lyme disease but has not been widely employed for STI prevention until now. Doxycycline is a widely used medication that is generally safe, with side effects that typically resolve upon discontinuation. Several open-label studies have shown that taking 200 mg of doxycycline within 72 hours of condomless sex significantly reduces the incidence of chlamydia, gonorrhea, and syphilis among gay, bisexual, and other men who have sex with men, as well as transgender women who have previously had a bacterial STI. However, these benefits have not been consistently observed among cisgender women and heterosexual men.


Given these findings, the Centers for Disease Control and Prevention now recommends that clinicians discuss the risks and benefits of doxycycline PEP (Doxy PEP) with gay, bisexual, and other men who have sex with men, as well as transgender women who have had a bacterial STI in the past 12 months. This discussion should be part of a shared decision-making process, advising the use of 200 mg of doxycycline within 72 hours of oral, vaginal, or anal sex, with the recommendation not to exceed 200 mg every 24 hours and to reassess the need for continued use every 3-6 months. Doxy PEP can be safely prescribed with preexposure prophylaxis for HIV (PrEP). Patients who receive PrEP may often be eligible for Doxy PEP, though the groups are not always the same.
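
For clinics that want to embed these timing rules in a triage protocol or electronic health record smart set, the following minimal sketch (hypothetical names, Python used purely for illustration) applies only the 72-hour exposure window and the 200 mg per 24 hours ceiling from the CDC guidance quoted above; it does not replace the shared decision-making step or the eligibility assessment.

```python
from typing import Optional

MAX_SINGLE_DOSE_MG = 200

def doxy_pep_timing(hours_since_sex: float, hours_since_last_dose: Optional[float]) -> str:
    """Illustrative timing check based on the parameters quoted above.

    Eligibility (shared decision-making with men who have sex with men and
    transgender women who have had a bacterial STI in the past 12 months)
    is assumed to have been established separately; this only checks the
    72-hour window and the 200 mg per 24 hours limit.
    """
    if hours_since_sex > 72:
        return "Outside the 72-hour window; Doxy PEP is not advised for this exposure."
    if hours_since_last_dose is not None and hours_since_last_dose < 24:
        return "200 mg already taken within the past 24 hours; do not repeat the dose yet."
    return (f"Take {MAX_SINGLE_DOSE_MG} mg doxycycline now; "
            "reassess the need for continued use every 3-6 months.")

print(doxy_pep_timing(hours_since_sex=10, hours_since_last_dose=None))
```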

The shared decision-making process is essential when considering Doxy PEP. While it is cost-effective and proven to reduce the risk of gonorrhea, chlamydia, and syphilis, its benefits vary among different populations. Moreover, some patients may experience side effects such as photosensitivity and gastrointestinal discomfort. Since the effectiveness of prophylaxis is closely tied to the timing of exposure and the patient’s current risk factors, it is important to regularly evaluate whether Doxy PEP remains beneficial. Because a clear benefit has not yet been shown for heterosexual men and cisgender women, options for these groups still need to be explored.

Integrating Doxy PEP into a primary care practice can be done efficiently. A standing order protocol could be established for telehealth visits or nurse triage, allowing timely administration when patients report an exposure within 72 hours. It could also be incorporated into electronic medical records as part of a smart set for easy access to orders and as standard educational material in after-visit instructions. Because this option is new, it is also important to discuss it with patients before they need it so that they are aware of it should the need arise. While concerns about antibiotic resistance are valid, studies have not yet shown significant resistance issues related to Doxy PEP use, though ongoing monitoring is necessary.

You might wonder why primary care should prioritize this intervention. As the first point of contact, primary care providers are well-positioned to identify the need for prophylaxis, particularly since its effectiveness diminishes over time. Furthermore, the established, trusting relationships that primary care physicians often have with their patients create a nonjudgmental environment that encourages disclosure of potential exposures. This trust, combined with easier access to care, can make a significant difference in the timely provision of postexposure prophylaxis. By offering comprehensive, holistic care, including prophylaxis, primary care physicians can prevent infections and address conditions before they lead to serious complications. Therefore, family medicine physicians should consider incorporating Doxy PEP into their practices as a standard of care.
 

Dr. Wheat is vice chair of Diversity, Equity, and Inclusion, Department of Family and Community Medicine, and associate professor, Family and Community Medicine, at Northwestern University’s Feinberg School of Medicine, Chicago. She has no relevant financial disclosures.

References

Bachmann LH et al. CDC Clinical Guidelines on the Use of Doxycycline Postexposure Prophylaxis for Bacterial Sexually Transmitted Infection Prevention, United States, 2024. MMWR Recomm Rep 2024;73(No. RR-2):1-8.

Traeger MW et al. Potential Impact of Doxycycline Postexposure Prophylaxis Prescribing Strategies on Incidence of Bacterial Sexually Transmitted Infections. Clin Infect Dis. 2023 Aug 18. doi: 10.1093/cid/ciad488.

Controlling Six Risk Factors Can Combat CKD in Obesity

Article Type
Changed
Wed, 09/25/2024 - 06:11

 

TOPLINE:

Optimal management of blood pressure, A1c levels, low-density lipoprotein cholesterol (LDL-C), albuminuria, smoking, and physical activity may reduce the excess risk for chronic kidney disease (CKD) typically linked to obesity. The protective effect is more pronounced in men, in those with lower healthy diet scores, and in users of diabetes medication.

METHODOLOGY:

  • Obesity is a significant risk factor for CKD, but it is unknown if managing multiple other obesity-related CKD risk factors can mitigate the excess CKD risk.
  • Researchers assessed CKD risk factor control in 97,538 participants with obesity from the UK Biobank and compared them with an equal number of age- and sex-matched control participants with normal body weight and no CKD at baseline.
  • Participants with obesity were assessed for six modifiable risk factors: Blood pressure, A1c levels, LDL-C, albuminuria, smoking, and physical activity.
  • Overall, 2487, 12,720, 32,388, 36,988, and 15,381 participants with obesity had two or fewer, three, four, five, or six risk factors under combined control, respectively, with the two-or-fewer group serving as the reference.
  • The primary outcome was incident CKD in relation to the degree of combined risk factor control; CKD risk in participants with obesity was also compared with CKD incidence in matched normal-weight participants.

TAKEAWAY:

  • During a median follow-up period of 10.8 years, 3954 cases of incident CKD were reported in participants with obesity and 1498 cases in matched persons of normal body mass index (BMI).
  • In a stepwise pattern, optimal control of each additional risk factor was associated with an 11% reduction in the incidence of CKD events (adjusted hazard ratio [aHR], 0.89; 95% CI, 0.86-0.91), with the reduction reaching 49% (aHR, 0.51; 95% CI, 0.43-0.61) for combined control of all six risk factors in participants with obesity (a rough compounding check appears after this list).
  • The protective effect of combined control of risk factors was more pronounced in men vs women, in those with lower vs higher healthy diet scores, and in users vs nonusers of diabetes medication.
  • A similar stepwise pattern emerged between the number of risk factors controlled and CKD risk in participants with obesity compared with matched individuals of normal BMI, with the excess CKD risk eliminated in participants with obesity with six risk factors under control.
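
As a rough back-of-envelope check on how the per-factor and overall estimates relate (an illustration, not a calculation from the paper), assume the 11% per-factor reduction compounds multiplicatively, so that controlling k risk factors corresponds to an adjusted hazard ratio of roughly

    \[
    \mathrm{aHR}(k) \approx 0.89^{\,k}, \qquad 0.89^{4} \approx 0.63, \qquad 0.89^{6} \approx 0.50 .
    \]

Relative to the two-or-fewer reference group, full control adds roughly four factors, so uniform compounding would predict an aHR near 0.63; the reported aHR of 0.51 for all six factors is somewhat stronger than that, suggesting the per-factor benefit was not uniform across the gradient.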

IN PRACTICE:

“Comprehensive control of risk factors might effectively neutralize the excessive CKD risk associated with obesity, emphasizing the potential of a joint management approach in the prevention of CKD in this population,” the authors wrote.

SOURCE:

The study was led by Rui Tang, MS, Department of Epidemiology, School of Public Health and Tropical Medicine, Tulane University, New Orleans, Louisiana. It was published online in Diabetes, Obesity and Metabolism.

LIMITATIONS:

The evaluated risk factors for CKD were arbitrarily selected, which may not represent the ideal group. The study did not consider the time-varying effect of joint risk factor control owing to the lack of some variables such as A1c. The generalizability of the findings was limited because over 90% of the UK Biobank cohort is composed of White people and individuals with healthier behaviors compared with the overall UK population.

DISCLOSURES:

The study was supported by grants from the US National Heart, Lung, and Blood Institute and the National Institute of Diabetes and Digestive and Kidney Diseases. The authors declared no conflicts of interest.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.


Starting Mammography at Age 40 May Backfire Due to False Positives

Article Type
Changed
Thu, 09/19/2024 - 15:52

Earlier this year, I wrote a Medscape commentary explaining my disagreement with the updated recommendation from the US Preventive Services Task Force (USPSTF) that all women at average risk for breast cancer start screening mammography at age 40. The bottom line is that when the evidence doesn’t change, the guidelines shouldn’t change. Since then, other screening experts have criticized the USPSTF guideline on similar grounds, and a national survey reported that nearly 4 out of 10 women in their 40s preferred to delay breast cancer screening after viewing a decision aid and a personalized breast cancer risk estimate.

The decision analysis performed for the USPSTF guideline estimated that compared with having mammography beginning at age 50, 1000 women who begin at age 40 experience 519 more false-positive results and 62 more benign breast biopsies. Another study suggested that anxiety and other psychosocial harms resulting from a false-positive test are similar between patients who require a biopsy vs additional imaging only. Of greater concern, women who have false-positive results are less likely to return for their next scheduled screening exam.

A recent analysis of 2005-2017 data from the US Breast Cancer Surveillance Consortium found that about 1 in 10 mammograms had a false-positive result. Sixty percent of these patients underwent immediate additional imaging, 27% were recalled for diagnostic imaging within the next few days to weeks, and 13% were advised to have a biopsy. While patients who had additional imaging at the same visit were only 1.9% less likely to return for screening mammography within 30 months compared with those with normal mammograms, women who were recalled for short-interval follow-up or recommended for biopsy were 15.9% and 10% less likely to return, respectively. For unclear reasons, women who identified as Asian or Hispanic had even lower rates of return screening after false-positive results.

These differences matter because women in their 40s, with the lowest incidence of breast cancer among those undergoing screening, have a lot of false positives. A patient who follows the USPSTF recommendation and starts screening at age 40 has a 42% chance of having at least one false positive with every-other-year screening, or a 61% chance with annual screening, by the time she turns 50. If some of these patients are so turned off by false positives that they don’t return for regular mammography in their 50s and 60s, when screening is the most likely to catch clinically significant cancers at treatable stages, then moving up the starting age may backfire and cause net harm.
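
The cumulative figures follow from a simple probability argument. As an illustration (assuming, for simplicity, a constant per-examination false-positive rate p and independence between screening rounds, which the published estimates do not require), the chance of at least one false positive over n rounds is

    \[
    P(\text{at least one false positive}) = 1 - (1 - p)^{n}.
    \]

With p of about 0.10, roughly the 1-in-10 rate described above, biennial screening through one's 40s (n = 5) gives 1 - 0.90^5 ≈ 0.41, and annual screening (n = 10) gives 1 - 0.90^10 ≈ 0.65, in the same neighborhood as the 42% and 61% estimates cited here.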

The recently implemented FDA rule requiring mammography reports to include breast density could compound this problem. Because younger women are more likely to have dense breasts, more of them will probably decide to have supplemental imaging to look for cancer. I previously pointed out that we don’t know whether supplemental imaging with breast ultrasonography or MRI reduces cancer deaths, but we do know that it increases false-positive results.

I have personally cared for several patients who abandoned screening mammography for long stretches, or permanently, after having endured one or more benign biopsies prompted by a false-positive result. I vividly recall one woman in her 60s who was very reluctant to have screening tests in general, and mammography in particular, for that reason. After she had been my patient for a few years, I finally persuaded her to resume screening. We were both surprised when her first mammogram in more than a decade revealed an early-stage breast cancer. Fortunately, the tumor was successfully treated, but for her, an earlier false-positive result nearly ended up having critical health consequences.

Dr. Lin is associate director, Family Medicine Residency Program, Lancaster General Hospital, Lancaster, Pennsylvania. He blogs at Common Sense Family Doctor. He has no relevant financial relationships.

A version of this article appeared on Medscape.com.


Should There Be a Mandatory Retirement Age for Physicians?

Article Type
Changed
Thu, 09/19/2024 - 15:47

This transcript has been edited for clarity.

I’d like to pose a question: When should doctors retire? When, as practicing physicians or surgeons, do we become too old to deliver competent service? 

You will be amazed to hear, those of you who have listened to my videos before — and although it is a matter of public knowledge — that I’m 68. I know it’s impossible to imagine, due to this youthful appearance, visage, and so on, but I am. I’ve been a cancer doctor for 40 years; therefore, I need to think a little about retirement. 

There are two elements of this for me. I’m a university professor, and in Oxford we did vote, as a democracy of scholars, to have a mandatory retirement age around 68. This is so that we can bring new blood forward so that we can create the space to promote new professors, to bring youngsters in to make new ideas, and to get rid of us fusty old lot. 

The other argument would be, of course, that we are wise, we’re experienced, we are world-weary, and we’re successful — otherwise, we wouldn’t have lasted as academics as long. Nevertheless, we voted to do that. 

It’s possible to have a discussion with the university to extend this, and for those of us who are clinical academics, I have an honorary appointment as a consultant cancer physician in the hospital and my university professorial appointment, too.

I can extend it probably until I’m about 70. It feels like a nice, round number at which to retire — somewhat arbitrarily, one would admit. But does that feel right? 

In the United States, more than 25% of the physician workforce is over the age of 65. There are many studies showing that there is a 20% cognitive decline for most individuals between the ages of 45 and 65.

Are we, as an older workforce, as capable as we once were? Clearly, this is highly individual. It depends on each of our own health status, where we started from, and so on, but are there any general rules that we can apply? I think these are starting to creep in around the sense of revalidation.

In the United Kingdom, we have a General Medical Council (GMC). I need to have a license to practice from the GMC and a sense of fitness to practice. I have annual appraisals within the hospital system, in which I explore delivery of care, how I’m doing as a mentor, am I reaching the milestones I’ve set in terms of academic achievements, and so on.

This is a peer-to-peer process. We have senior physicians — people like myself — who act as appraisers to support our colleagues and to maintain that sense of fitness to practice. Every 5 years, I’m revalidated by the GMC. They take account of the annual appraisals and a report made by the senior physician within my hospital network who’s a so-called designated person.

These two elements come together with patient feedback, with 360-degree feedback from colleagues, and so on. This is quite a firmly regulated system that I think works. Our mandatory retirement age of 65 has gone; that was phased out by the government. In fact, our NHS is making an effort to retain older doctors in the workforce.

They see the benefits of mentorship, experience, leadership, and networks. At a time when the majority of NHS staff are actively seeking to retire at 65, the NHS is trying to retain and pull back those of us who have been around for that wee bit longer and who still feel committed to doing it. 

I’d be really interested to see what you think. There’s variation from country to country. I know that, in Australia, they’re talking about annual appraisals of doctors over the age of 70. I’d be very interested to hear what you think is likely to happen in the United States. 

I think our system works pretty well, as long as you’re within the NHS and hospital system. If you wanted to still practice, but practice privately, you would still have to find somebody who’d be prepared to conduct appraisals and so on outside of the NHS. It’s an interesting area. 

For myself, I still feel competent. Patients seem to like me. That’s an objective assessment from the 360-degree feedback, in which patients reflected very positively indeed on my approach to the delivery of care and so on, as did colleagues. I’m still publishing, I go to meetings, I chair things, bits and bobs. I’d say I’m a wee bit unusual in terms of still having a strong academic profile.

It’s an interesting question. Richard Doll, one of the world’s great epidemiologists who, of course, was the dominant discoverer of the link between smoking and lung cancer, was attending seminars, sitting in the front row, and coming into university 3 days a week at age 90, continuing to be contributory with his extraordinarily sharp intellect and vast, vast experience.

When I think of experience, all young cancer doctors are now immunologists. When I was a young doctor, I was a clinical pharmacologist. There are many lessons and tricks that I learned which I do need to pass on to the younger generation of today. What do you think? Should there be a mandatory retirement age? How do we best measure, assess, and revalidate older physicians and surgeons? And how can those who choose to keep practicing best continue to contribute? For the time being, as always, thanks for listening.
 

Dr. Kerr is professor, Nuffield Department of Clinical Laboratory Science, University of Oxford, and professor of cancer medicine, Oxford Cancer Centre, Oxford, United Kingdom. He has disclosed ties with Celleron Therapeutics, Oxford Cancer Biomarkers (Board of Directors); Afrox (charity; Trustee); GlaxoSmithKline and Bayer HealthCare Pharmaceuticals (Consultant), Genomic Health; Merck Serono, and Roche.

A version of this article appeared on Medscape.com.


Fecal Immunochemical Test Performance for CRC Screening Varies Widely

Article Type
Changed
Mon, 10/07/2024 - 02:24

Although considered a single class, fecal immunochemical tests (FITs) vary in their ability to detect advanced colorectal neoplasia (ACN) and should not be considered interchangeable, new research suggests.

In a comparative performance analysis of five commonly used FITs for colorectal cancer (CRC) screening, researchers found statistically significant differences in positivity rates, sensitivity, and specificity, as well as important differences in rates of unusable tests.

“Our findings have practical importance for FIT-based screening programs as these differences affect the need for repeated FIT, the yield of ACN detection, and the number of diagnostic colonoscopies that would be required to follow up on abnormal findings,” wrote the researchers, led by Barcey T. Levy, MD, PhD, with University of Iowa, Iowa City.

The study was published online in Annals of Internal Medicine.
 

Wide Variation Found

Despite widespread use of FITs for CRC screening, there is limited data to help guide test selection. Understanding the comparative performance of different FITs is “crucial” for a successful FIT-based screening program, the researchers wrote.

Dr. Levy and colleagues directly compared the performance of five commercially available FITs — including four qualitative tests (Hemoccult ICT, Hemosure iFOB, OC-Light S FIT, and QuickVue iFOB) and one quantitative test (OC-Auto FIT) — using colonoscopy as the reference standard.

Participants included a diverse group of 3761 adults (mean age, 62 years; 63% women). Each participant was given all five tests and completed them using the same stool sample. They sent the tests by first class mail to a central location, where FITs were analyzed by a trained professional on the day of receipt.

The primary outcome was test performance (sensitivity and specificity) for ACN, defined as advanced polyps or CRC.

A total of 320 participants (8.5%) were found to have ACN based on colonoscopy results, including nine with CRC (0.2%) — rates that are similar to those found in other studies.

The sensitivity for detecting ACN ranged from 10.1% (Hemoccult ICT) to 36.7% (OC-Light S FIT), and specificity varied from 85.5% (OC-Light S FIT) to 96.6% (Hemoccult ICT).

“Given the variation in FIT cutoffs reported by manufacturers, it is not surprising that tests with lower cutoffs (such as OC-Light S FIT) had higher sensitivity than tests with higher cutoffs (such as Hemoccult ICT),” Dr. Levy and colleagues wrote.

Test positivity rates varied fourfold across FITs, from 3.9% for Hemoccult ICT to 16.4% for OC-Light S FIT. 

The rates of tests deemed unevaluable (due to factors such as indeterminate results or user mistakes) ranged from 0.2% for OC-Auto FIT to 2.5% for QuickVue iFOB.

The highest positive predictive value (PPV) was observed with OC-Auto FIT (28.9%) and the lowest with Hemosure iFOB (18.2%). The negative predictive value was similar across tests, ranging from 92.2% to 93.3%, indicating consistent performance in ruling out disease.
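Readers who want to see how these numbers hang together can largely reproduce them from the published summary statistics. The short sketch below is an illustration built from the reported figures, not the study’s analysis code: it plugs each test’s sensitivity and specificity, together with the 8.5% ACN prevalence, into the standard formulas for positivity rate, PPV, and NPV. Small discrepancies from the published values reflect rounding.

```python
# Minimal sketch using the summary figures reported in the study
# (8.5% ACN prevalence; per-test sensitivity and specificity).
# Illustrative only -- not the study's analysis code.

def test_metrics(sens: float, spec: float, prev: float) -> dict:
    """Positivity rate, PPV, and NPV for a given sensitivity,
    specificity, and disease prevalence."""
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    true_neg = spec * (1 - prev)
    false_neg = (1 - sens) * prev
    positivity = true_pos + false_pos
    return {
        "positivity": positivity,
        "ppv": true_pos / positivity,
        "npv": true_neg / (true_neg + false_neg),
    }

prevalence = 0.085  # 8.5% of participants had ACN on colonoscopy
for name, sens, spec in [("Hemoccult ICT", 0.101, 0.966),
                         ("OC-Light S FIT", 0.367, 0.855)]:
    m = test_metrics(sens, spec, prevalence)
    print(f"{name}: positivity {m['positivity']:.1%}, "
          f"PPV {m['ppv']:.1%}, NPV {m['npv']:.1%}")

# Output reproduces the reported positivity rates (~3.9% and ~16.4%) and puts
# PPV and NPV close to the published ranges (18.2%-28.9% and 92.2%-93.3%);
# small differences reflect rounding of the inputs.
```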

The study also identified significant differences in test sensitivity based on factors such as the location of neoplasia (higher sensitivity for distal lesions) and patient characteristics (higher sensitivity in people with higher body mass index and lower income).

Dr. Levy and colleagues said their findings have implications both in terms of clinical benefits and cost-effectiveness of CRC screening using FITs.

“Tests with lower sensitivity will miss more patients with CRC and advanced polyps, and tests with higher sensitivity and lower PPV will require more colonoscopies to detect patients with actionable findings,” they wrote.
 

 

 

‘Jaw-Dropping’ Results

The sensitivity results are “jaw-dropping,” Robert Smith, PhD, senior vice-president for cancer screening at the American Cancer Society, said in an interview. “A patient should have at least a 50/50 chance of having their colorectal cancer detected with a stool test at the time of testing.”

“What these numbers show is that the level that the manufacturers believe their test is performing is not reproduced,” Dr. Smith added.

This study adds to “concerns that have been raised about the inherent limitations and the performance of these tests that have been cleared for use and that are supposed to be lifesaving,” he said.

Clearance by the US Food and Drug Administration should mean that there’s essentially “no risk to using the test in terms of the test itself being harmful,” Dr. Smith said. But that’s not the case with FITs “because it’s harmful if you have cancer and your test doesn’t find it.”

By way of study limitations, Dr. Levy and colleagues said it’s important to note that they did not evaluate the “programmatic” sensitivity of repeating FIT testing every 1-2 years, as is generally recommended in screening guidelines. Therefore, the sensitivity of a single FIT may be lower than that of a repeated FIT. Also, variability in the FIT collection process by participants might have affected the results.

The study had no commercial funding. Disclosures for authors are available with the original article. Dr. Smith had no relevant disclosures.
 

A version of this article appeared on Medscape.com.


Hidden in Plain Sight: The Growing Epidemic of Ultraprocessed Food Addiction

Article Type
Changed
Thu, 09/19/2024 - 15:35

Over the past few decades, researchers have developed a compelling case against ultraprocessed foods and beverages, linking them to several chronic diseases and adverse health conditions. Yet, even as this evidence mounted, these food items have become increasingly prominent in diets globally. 

Now, recent studies are unlocking why cutting back on ultraprocessed foods can be so challenging. In their ability to fuel intense cravings, loss of control, and even withdrawal symptoms, ultraprocessed foods appear as capable of triggering addiction as traditional culprits like tobacco and alcohol. 

This has driven efforts to better understand the addictive nature of these foods and identify strategies for combating it. 
 

The Key Role of the Food Industry

Some foods are more likely to trigger addictions than others. For instance, in our studies, participants frequently mention chocolate, pizza, French fries, potato chips, and soda as some of the most addictive foods. What these foods all share is an ability to deliver high doses of refined carbohydrates, fat, or salt at levels exceeding those found in natural foods (eg, fruits, vegetables, beans).

Furthermore, ultraprocessed foods are industrially mass-produced in a process that relies on the heavy use of flavor enhancers and additives, as well as preservatives and packaging that make them shelf-stable. This has flooded our food supply with cheap, accessible, hyperrewarding foods that our brains are not well equipped to resist.

To add to these already substantial effects, the food industry often employs strategies reminiscent of Big Tobacco. They engineer foods to hit our “bliss points,” maximizing craving and fostering brand loyalty from a young age. This product engineering, coupled with aggressive marketing, makes these foods both attractive and seemingly ubiquitous. 
 

How Many People Are Affected?

Addiction to ultraprocessed food is more common than you might think. According to the Yale Food Addiction Scale — a tool that uses the same criteria for diagnosing substance use disorders to assess ultraprocessed food addiction (UPFA) — about 14% of adults and 12% of children show clinically significant signs of addiction to such foods. This is quite similar to addiction rates among adults for legal substances like alcohol and tobacco. 

Research has shown that behaviors and brain mechanisms contributing to addictive disorders, such as cravings and impulsivity, also apply to UPFA. 

Many more people who fall short of the full criteria for UPFA are still influenced by the addictive properties of these foods. Picture a teenager craving a sugary drink after school, a child needing the morning cereal fix, or adults reaching for candy and fast food; these scenarios illustrate how addictive ultraprocessed foods permeate our daily lives. 

From a public health standpoint, this comes at a significant cost. Even experiencing one or two symptoms of UPFA, such as intense cravings or a feeling of loss of control over intake, can lead to consuming too many calories, sugar, fat, and sodium in a way that puts health at risk.
 

Clinical Implications

Numerous studies have found that individuals who exhibit UPFA have more severe mental and physical health challenges. For example, UPFA is associated with higher rates of diet-related diseases (like type 2 diabetes), greater overall mental health issues, and generally poorer outcomes in weight loss treatments.

Despite the growing understanding of UPFA’s relevance in clinical settings, research is still limited on how to best treat, manage, or prevent it. Most of the existing work has focused on investigating whether UPFA is indeed a real condition, with efforts to create clinical guidelines only just beginning.

Of note, UPFA isn’t officially recognized as a diagnosis — yet. If it were, it could spark much more research into how to handle it clinically.

There is some debate about whether we really need this new diagnosis, given that eating disorders are already recognized. However, the statistics tell a different story: Around 14% of people might have UPFA compared with about 1% for binge-type eating disorders. This suggests that many individuals with problematic eating habits are currently flying under the radar with our existing diagnostic categories. 

What’s even more concerning is that these individuals often suffer significant problems and exhibit distinct brain differences, even if they do not neatly fit into an existing eating disorder diagnosis. Officially recognizing UPFA could open up new avenues for support and lead to better treatments aimed at reducing compulsive eating patterns.
 

 

 

Treatment Options

Treatment options for UPFA are still being explored. Initial evidence suggests that medications used for treating substance addiction, such as naltrexone and bupropion, might help with highly processed food addiction as well. Newer drugs, like glucagon-like peptide-1 receptor agonists, which appear to curb food cravings and manage addictive behaviors, also look promising.

Psychosocial approaches can also be used to address UPFA. Strategies include:

  • Helping individuals become more aware of their triggers for addictive patterns of intake. This often involves identifying certain types of food (eg, potato chips, candy), specific places or times of day (eg, sitting on the couch at night while watching TV), and particular emotional states (eg, anger, loneliness, boredom, sadness). Increasing awareness of personal triggers can help people minimize their exposure to these and develop coping strategies when they do arise.
  • Many people use ultraprocessed foods to cope with challenging emotions. Helping individuals develop healthier strategies to regulate their emotions can be key. This may include seeking out social support, journaling, going for a walk, or practicing mindfulness.
  • UPFA can be associated with erratic and inconsistent eating patterns. Stabilizing eating habits by consuming regular meals composed of more minimally processed foods (eg, vegetables, fruits, high-quality protein, beans) can help heal the body and reduce vulnerability to ultraprocessed food triggers.
  • Many people with UPFA have other existing mental health conditions, including mood disorders, anxiety, substance use disorders, or trauma-related disorders. Addressing these co-occurring mental health conditions can help reduce reliance on ultraprocessed foods.

Public-policy interventions may also help safeguard vulnerable populations from developing UPFA. For instance, support exists for policies to protect children from cigarette marketing and to put clear addiction warning labels on cigarette packages. A similar approach could be applied to reduce the harms associated with ultraprocessed foods, particularly for children.

Combating this growing problem requires treating ultraprocessed foods like other addictive substances. By identifying the threat posed by these common food items, we can not only help patients with UPFA, but also potentially stave off the development of several diet-related conditions.
 

Dr. Gearhardt, professor of psychology, University of Michigan, Ann Arbor, has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.


Bariatric Surgery and Weight Loss Make Brain Say Meh to Sweets

Article Type
Changed
Thu, 09/19/2024 - 14:17

 

TOPLINE:

A preference for less sweet beverages after bariatric surgery and weight loss appears to stem from a lower brain reward response to sweet taste without affecting the sensory regions.
 

METHODOLOGY:

  • Previous studies have suggested that individuals undergoing bariatric surgery show reduced preference for sweet-tasting food post-surgery, but the mechanisms behind these changes remain unclear.
  • This observational cohort study aimed to examine the neural processing of sweet taste in the reward regions of the brain before and after bariatric surgery in 24 women with obesity (mean body mass index [BMI], 47) and in 21 control participants with normal weight to overweight (mean BMI, 23.5).
  • Participants (mean age, about 43 years; 75%-81% White) underwent sucrose taste testing and functional MRI (fMRI) to compare the brain’s responses to sucrose solutions of 0.10 M and 0.40 M (the latter akin to sugar-sweetened beverages such as Coca-Cola at ~0.32 M and Mountain Dew at ~0.35 M) versus water; a rough sugar-to-molarity conversion is sketched after this list.
  • In the bariatric surgery group, participants underwent fMRI 1-117 days before surgery, and 21 participants who lost about 20% of their weight after the surgery underwent a follow-up fMRI roughly 3-4 months later.
  • The researchers analyzed the brain’s reward response using a composite activation of several reward system regions (the ventral tegmental area, ventral striatum, and orbitofrontal cortex) and of sensory regions (the primary somatosensory cortex and primary insula taste cortex).
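As a quick sanity check on what those molar concentrations mean in everyday terms, the sketch below converts sucrose molarity to grams of sugar per liter. It is an illustrative back-of-the-envelope calculation, not part of the study; commercial sodas are sweetened with sucrose or high-fructose corn syrup, so the ~0.32-0.35 M figures are sucrose-equivalent approximations.

```python
# Back-of-the-envelope conversion between sucrose molarity and grams of
# sugar per liter. Illustrative only; not from the study.

SUCROSE_MW = 342.3  # molecular weight of sucrose, g/mol

def grams_per_liter(molarity: float) -> float:
    return molarity * SUCROSE_MW

for m in (0.10, 0.40, 0.32):
    print(f"{m:.2f} M sucrose ~= {grams_per_liter(m):.0f} g/L")

# 0.10 M ~= 34 g/L, 0.40 M ~= 137 g/L, 0.32 M ~= 110 g/L
# (roughly 11 g sugar per 100 mL, in the range of a typical cola)
```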

TAKEAWAY:

  • The perceived intensity of sweetness was comparable between the control group and the bariatric surgery group both before and after surgery.
  • In the bariatric surgery group, the average preferred sweet concentration decreased from 0.52 M before surgery to 0.29 M after surgery (P = .008).
  • The fMRI analysis indicated that women showed a trend toward a higher reward response to 0.4 M sucrose before bariatric surgery than the control participants.
  • The activation of the reward region in response to 0.4 M sucrose (but not 0.1 M) declined in the bariatric surgery group after surgery (P = .042).

IN PRACTICE:

“Our findings suggest that both the brain reward response to and subjective liking of an innately desirable taste decline following bariatric surgery,” the authors wrote.
 

SOURCE:

This study was led by Jonathan Alessi, Indiana University School of Medicine, Indianapolis, and published online in Obesity.
 

LIMITATIONS:

The study sample size was relatively small, and the duration of follow-up was short, with recruitment curtailed by the COVID-19 pandemic. This study did not assess the consumption of sugar or sweetened food, which could provide further insights into changes in the dietary behavior post-surgery. Participants included women only, and the findings could have been different if men were recruited.
 

DISCLOSURES:

This study was funded by the American Diabetes Association, Indiana Clinical and Translational Sciences Institute, and National Institute on Alcohol Abuse and Alcoholism. Three authors reported financial relationships with some pharmaceutical companies outside of this study.
 

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.


Revolutionizing Headache Medicine: The Role of Artificial Intelligence

Article Type
Changed
Wed, 09/25/2024 - 16:36

 

 

As we move further into the 21st century, technology continues to revolutionize various facets of our lives. Healthcare is a prime example. Advances in technology have dramatically reshaped the way we develop medications, diagnose diseases, and enhance patient care. The rise of artificial intelligence (AI) and the widespread adoption of digital health technologies have marked a significant milestone in improving the quality of care. AI, with its ability to leverage algorithms, deep learning, and machine learning to process data, make decisions, and perform tasks autonomously, is becoming an integral part of modern society. It is embedded in various technologies that we rely on daily, from smartphones and smart home devices to content recommendations on streaming services and social media platforms.

 

In healthcare, AI has applications in numerous fields, such as radiology, where it streamlines processes such as organizing patient appointments, optimizing radiation protocols for safety and efficiency, and enhancing documentation through advanced image analysis. AI plays an integral role in imaging tasks like image enhancement, lesion detection, and precise measurement, and in difficult-to-interpret radiologic studies, such as some mammography images, it can be a crucial aid to the radiologist. Additionally, AI has significantly improved remote patient monitoring, which enables healthcare professionals to monitor and assess patient conditions without needing in-person visits. Remote patient monitoring gained prominence during the COVID-19 pandemic and continues to be a valuable tool in post-pandemic care. Study results have highlighted that AI-driven ambient dictation tools increase provider engagement with patients during consultations while reducing the time spent documenting in electronic health records.

Like many other medical specialties, headache medicine also uses AI. Most prominently, AI has been used in models and engines in assisting with headache diagnoses. A noteworthy example of AI in headache medicine is the development of an online, computer-based diagnostic engine (CDE) by Rapoport et al, called BonTriage. This tool is designed to diagnose headaches by employing a rule set based on the International Classification of Headache Disorders-3 (ICHD-3) criteria for primary headache disorders while also evaluating secondary headaches and medication overuse headaches. By leveraging machine learning, the CDE has the potential to streamline the diagnostic process, reducing the number of questions needed to reach a diagnosis and making the experience more efficient. This information can then be printed as a PDF file and taken by the patient to a healthcare professional for further discussion, fostering a more accurate, fluid, and conversational consultation.
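To make the rule-based approach concrete, here is a minimal sketch of how one ICHD-3 rule, the criteria for migraine without aura, might be encoded from questionnaire answers. It is an illustration only: it assumes a simple boolean questionnaire representation, omits criterion E (exclusion of other diagnoses), and is not BonTriage’s actual rule set, which also covers other primary headaches, secondary headaches, and medication overuse headache.

```python
# Simplified illustration of a rule-based check for ICHD-3 migraine without
# aura (criteria A-D only). Not the BonTriage rule set.
from dataclasses import dataclass

@dataclass
class HeadacheHistory:
    attack_count: int        # number of attacks of this headache type
    duration_hours: float    # typical untreated attack duration
    unilateral: bool
    pulsating: bool
    moderate_or_severe: bool
    worse_with_activity: bool
    nausea_or_vomiting: bool
    photophobia: bool
    phonophobia: bool

def meets_migraine_without_aura(h: HeadacheHistory) -> bool:
    criterion_a = h.attack_count >= 5                      # at least 5 attacks
    criterion_b = 4 <= h.duration_hours <= 72              # 4-72 hours
    pain_features = sum([h.unilateral, h.pulsating,
                         h.moderate_or_severe, h.worse_with_activity])
    criterion_c = pain_features >= 2                       # >= 2 of 4 features
    criterion_d = h.nausea_or_vomiting or (h.photophobia and h.phonophobia)
    return criterion_a and criterion_b and criterion_c and criterion_d
```

A full engine chains many such rules and asks follow-up questions only when needed, which is how the number of questions to reach a diagnosis can be reduced.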

 

A study was conducted to evaluate the accuracy of the CDE. Participants were randomly assigned to 1 of 2 sequences: (1) using the CDE followed by a structured standard interview with a headache specialist using the same ICHD-3 criteria or (2) starting with the structured standard interview followed by the CDE. The results demonstrated nearly perfect agreement in diagnosing migraine and probable migraine between the CDE and structured standard interview (κ = 0.82, 95% CI: 0.74, 0.90). The CDE demonstrated a diagnostic accuracy of 91.6% (95% CI: 86.9%, 95.0%), a sensitivity rate of 89.0% (95% CI: 82.5%, 93.7%), and a specificity rate of 97.0% (95% CI: 89.5%, 99.6%).
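For readers less familiar with these agreement statistics, the sketch below shows how accuracy, Cohen’s kappa, sensitivity, and specificity fall out of a 2x2 table of engine-versus-specialist diagnoses. The counts are hypothetical, chosen only to land near the magnitudes reported above; they are not the study’s data.

```python
# Hypothetical 2x2 agreement table (engine vs specialist migraine diagnosis).
# The counts are made up for illustration -- not the study data.
a, b = 120, 6    # both positive / engine positive, specialist negative
c, d = 10, 64    # engine negative, specialist positive / both negative
n = a + b + c + d

observed_agreement = (a + d) / n
# Chance agreement: probability both say "yes" plus both say "no" by chance
p_yes = ((a + b) / n) * ((a + c) / n)
p_no = ((c + d) / n) * ((b + d) / n)
kappa = (observed_agreement - (p_yes + p_no)) / (1 - (p_yes + p_no))

sensitivity = a / (a + c)   # engine-positive among specialist-positive
specificity = d / (b + d)   # engine-negative among specialist-negative
print(f"accuracy {observed_agreement:.1%}, kappa {kappa:.2f}, "
      f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")
# -> accuracy 92.0%, kappa 0.83, sensitivity 92.3%, specificity 91.4%
```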

 

A diagnostic engine such as this can save time that clinicians spend on documentation and allow more time for discussion with the patient. For instance, a patient can take the printout received from the CDE to an appointment; the printout gives a detailed history plus information about social and psychological issues, a list of medications taken, and results of previous testing. The CDE system was originally designed to help patients see a specialist amid a nationwide shortage of headache specialists: an estimated 45 million patients in the United States are seeking treatment for headache, yet there are only around 550 certified headache specialists. The CDE printout can help a patient obtain a consultation quickly and start evaluation and treatment earlier. This expert online consultation is currently free of charge.

 

Kwon et al developed a machine learning–based model designed to automatically classify headache disorders using data from a questionnaire. Their model was able to predict diagnoses for conditions such as migraine, tension-type headache, trigeminal autonomic cephalalgia, epicranial headache, and thunderclap headache. The model was trained on data from 2162 patients, all diagnosed by headache specialists, and achieved an overall accuracy of 81%, with a sensitivity of 88% and a specificity of 95% for diagnosing migraine. However, the model’s performance was less robust when applied to other headache disorders.
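As an illustration of what such a questionnaire-based classifier looks like in code, the sketch below trains a generic multiclass model on placeholder data. The feature set, algorithm, and data are assumptions for demonstration; Kwon et al’s actual variables, model, and training pipeline are not reproduced here.

```python
# Sketch of a questionnaire-based headache classifier in scikit-learn.
# Features, labels, and model choice are placeholders, not Kwon et al's setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 500
# Hypothetical yes/no questionnaire answers (e.g., unilateral pain,
# photophobia, autonomic features, thunderclap onset, duration category).
X = rng.integers(0, 2, size=(n, 12))
y = rng.choice(["migraine", "tension-type", "TAC", "epicranial", "thunderclap"],
               size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), zero_division=0))
# On random placeholder data the report is meaningless; with specialist-labeled
# questionnaires, the same pipeline yields the per-class sensitivity and
# specificity figures reported for models of this kind.
```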

 

Katsuki et al developed an AI model to help nonspecialists accurately diagnose headaches. This model analyzed 17 variables and was trained on data from 2800 patients, with additional testing and refinement using data from another 200 patients. To evaluate its effectiveness, 2 groups of non-headache specialists each assessed 50 patients: 1 group relied solely on their own expertise, while the other used the AI model. The group without AI assistance achieved an overall accuracy of 46% (κ = 0.21), while the group using the AI model significantly improved, reaching an overall accuracy of 83.2% (κ = 0.68).

 

Building on their work with AI for diagnosing headaches, Katsuki et al conducted a study using a smartphone application that tracked user-reported headache events alongside local weather data. The AI model revealed that lower barometric pressure, higher humidity, and increased rainfall were linked to the onset of headache attacks. The application also identified triggers for headaches in specific weather patterns, such as a drop in barometric pressure noted 6 hours before headache onset. The application of AI in monitoring weather changes could be crucial, especially given concerns that the rising frequency of severe weather events due to climate change may be exacerbating the severity and burden of migraine. Additionally, recent post hoc analyses of fremanezumab clinical trials have provided further evidence that weather changes can trigger headaches.
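The kind of feature engineering behind such a finding can be illustrated with a short sketch: deriving a 6-hour barometric-pressure change from hourly readings and checking it against logged headache onsets. The data below are hypothetical; this is not Katsuki et al’s model or application code.

```python
# Sketch (not the study's code): compute the 6-hour barometric-pressure change
# from hourly readings and inspect it at user-reported headache onset times.
import pandas as pd

# Hypothetical hourly weather log (steadily falling pressure) and onsets
weather = pd.DataFrame({
    "time": pd.date_range("2024-06-01", periods=48, freq="h"),
    "pressure_hpa": [1013 - 0.4 * i for i in range(48)],
})
headaches = pd.to_datetime(["2024-06-01 18:00", "2024-06-02 09:00"])

weather = weather.set_index("time")
# Positive values mean pressure fell over the preceding 6 hours
weather["drop_6h"] = weather["pressure_hpa"].shift(6) - weather["pressure_hpa"]

for onset in headaches:
    drop = weather["drop_6h"].asof(onset)
    print(f"{onset}: pressure fell {drop:.1f} hPa in the 6 hours before onset")
```

A production model would combine such engineered weather features with humidity, rainfall, and each user’s headache diary to learn personal trigger patterns.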

 

Rapoport and colleagues have also developed an application called Migraine Mentor, which accurately tracks headaches, triggers, health data, and response to medication on a smartphone. The patient spends 3 minutes a day answering a few questions about their day and whether they had a headache or took any medication. After 1 or 2 months, Migraine Mentor can generate a detailed report of the data and current trends that is sent to the patient, who can then share it with the clinician. The application also reminds patients when to document data and take medication.

 

However, although the use of AI in headache medicine appears promising, caution must be exercised to ensure proper results and information are disseminated. One rapidly expanding application of AI is the widely popular ChatGPT, which is built on a generative pretrained transformer (GPT), a type of large language model (LLM). An LLM is a deep learning algorithm designed to recognize, translate, predict, summarize, and generate text responses based on a given prompt. Such a model is trained on an extensive dataset that includes a diverse array of books, articles, and websites, exposing it to various language structures and styles. This training enables ChatGPT to generate responses that closely mimic human communication. LLMs are being used more and more in medicine to assist with generating patient documentation and educational materials.

 

However, Dr Fred Cohen published a perspective piece detailing how LLMs (such as ChatGPT) can produce misleading and inaccurate answers. In his example, he tasked ChatGPT with describing the epidemiology of migraine in penguins; the AI model generated a well-written and highly believable manuscript titled “Migraine Under the Ice: Understanding Headaches in Antarctica’s Feathered Friends.” The manuscript claims that migraine is more prevalent in male penguins than in females, with a peak age of onset between 4 and 5 years, and that emperor and king penguins are more susceptible to migraine than other penguin species. The paper was entirely fictitious (no studies of migraine in penguins have been published to date), illustrating that these models can produce nonfactual material.

 

For years, technological advancements have been reshaping many aspects of life, and medicine is no exception. AI has been successfully applied to streamline medical documentation, develop new drug targets, and deepen our understanding of various diseases. The field of headache medicine now also uses AI. Recent developments show significant promise, with AI aiding in the diagnosis of migraine and other headache disorders. AI models have even been used in the identification of potential drug targets for migraine treatment. Although there are still limitations to overcome, the future of AI in headache medicine appears bright.

 

If you would like to read more about Dr. Cohen’s work on AI and migraine, please visit fredcohenmd.com or TikTok @fredcohenmd. 

Author and Disclosure Information

Fred Cohen, MD,1,2 Alan Rapoport, MD3

1Department of Neurology, Mount Sinai Hospital, Icahn School of Medicine at Mount Sinai
2Department of Medicine, Mount Sinai Hospital, Icahn School of Medicine at Mount Sinai
3Department of Neurology, UCLA School of Medicine, Los Angeles

Disclosures:
Fred Cohen is a section editor for Current Pain and Headache Reports and has received honoraria from Springer Nature and Medlink Neurology.

Alan Rapoport is the editor-in-chief of Neurology Reviews® and a co-founder, with Dr Cowan and Dr Blyth, of BonTriage.

Corresponding Author:
Fred Cohen, MD
fredcohenmd@gmail.com

As we move further into the 21st century, technology continues to revolutionize various facets of our lives. Healthcare is a prime example. Advances in technology have dramatically reshaped the way we develop medications, diagnose diseases, and enhance patient care. The rise of artificial intelligence (AI) and the widespread adoption of digital health technologies have marked a significant milestone in improving the quality of care. AI, with its ability to leverage algorithms, deep learning, and machine learning to process data, make decisions, and perform tasks autonomously, is becoming an integral part of modern society. It is embedded in various technologies that we rely on daily, from smartphones and smart home devices to content recommendations on streaming services and social media platforms.

 

In healthcare, AI has applications in numerous fields, radiology among them. AI streamlines processes such as organizing patient appointments, optimizing radiation protocols for safety and efficiency, and enhancing documentation through advanced image analysis. AI plays an integral role in imaging tasks such as image enhancement, lesion detection, and precise measurement, and in difficult-to-interpret radiologic studies, such as some mammography images, it can be a crucial aid to the radiologist. AI has also significantly improved remote patient monitoring, enabling healthcare professionals to assess patient conditions without in-person visits; remote monitoring gained prominence during the COVID-19 pandemic and remains a valuable tool in post-pandemic care. Study results have likewise shown that AI-driven ambient dictation tools increase provider engagement with patients during consultations while reducing the time spent documenting in the electronic health record.

Like many other medical specialties, headache medicine also uses AI, most prominently in models and engines that assist with headache diagnosis. A noteworthy example is BonTriage, an online, computer-based diagnostic engine (CDE) developed by Rapoport et al. The tool diagnoses headaches by applying a rule set based on the International Classification of Headache Disorders-3 (ICHD-3) criteria for primary headache disorders, while also screening for secondary headaches and medication overuse headache. By leveraging machine learning, the CDE can streamline the diagnostic process, reducing the number of questions needed to reach a diagnosis and making the experience more efficient. The resulting diagnostic summary can be printed as a PDF file and taken by the patient to a healthcare professional for further discussion, fostering a more accurate, fluid, and conversational consultation.
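
To make the rule-based idea concrete, the short sketch below encodes the ICHD-3 criteria for migraine without aura as an explicit rule check. It is a minimal illustration only, not the BonTriage rule set or question flow: the field names and data structure are assumptions, and a real diagnostic engine would cover many more diagnoses, screen for secondary-headache red flags, and adapt its questioning.

# Minimal sketch of an ICHD-3-style rule check for migraine without aura (1.1).
# This is NOT the BonTriage rule set; field names and structure are
# illustrative assumptions based on the published ICHD-3 criteria.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class HeadacheHistory:
    attack_count: int              # attacks fulfilling the description below
    duration_hours: float          # typical untreated or unsuccessfully treated duration
    unilateral: bool
    pulsating: bool
    moderate_or_severe: bool
    worsened_by_routine_activity: bool
    nausea_or_vomiting: bool
    photophobia: bool
    phonophobia: bool
    better_explained_by_other_dx: bool

def meets_migraine_without_aura(h: HeadacheHistory) -> bool:
    """True if the history satisfies ICHD-3 criteria A through E for 1.1."""
    a = h.attack_count >= 5
    b = 4 <= h.duration_hours <= 72
    c = sum([h.unilateral, h.pulsating, h.moderate_or_severe,
             h.worsened_by_routine_activity]) >= 2
    d = h.nausea_or_vomiting or (h.photophobia and h.phonophobia)
    e = not h.better_explained_by_other_dx
    return all([a, b, c, d, e])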

 

A study was conducted to evaluate the accuracy of the CDE. Participants were randomly assigned to 1 of 2 sequences: (1) using the CDE followed by a structured standard interview with a headache specialist using the same ICHD-3 criteria or (2) starting with the structured standard interview followed by the CDE. The results demonstrated nearly perfect agreement in diagnosing migraine and probable migraine between the CDE and structured standard interview (κ = 0.82, 95% CI: 0.74, 0.90). The CDE demonstrated a diagnostic accuracy of 91.6% (95% CI: 86.9%, 95.0%), a sensitivity rate of 89.0% (95% CI: 82.5%, 93.7%), and a specificity rate of 97.0% (95% CI: 89.5%, 99.6%).
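
For readers less familiar with these metrics, the sketch below shows how accuracy, sensitivity, specificity, and Cohen's kappa are computed from a 2 x 2 table of CDE versus specialist-interview diagnoses. The counts used are arbitrary placeholders for illustration, not data from the study.

# How the agreement and accuracy figures above are computed from a 2 x 2 table
# of CDE vs specialist-interview diagnoses. The counts passed in at the bottom
# are arbitrary placeholders, not data from the validation study.
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    # tp: both positive; fp: CDE positive, specialist negative;
    # fn: CDE negative, specialist positive; tn: both negative.
    n = tp + fp + fn + tn
    accuracy = (tp + tn) / n
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # Cohen's kappa: observed agreement corrected for chance agreement.
    p_observed = accuracy
    p_chance = ((tp + fp) / n) * ((tp + fn) / n) + ((fn + tn) / n) * ((fp + tn) / n)
    kappa = (p_observed - p_chance) / (1 - p_chance)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "kappa": kappa}

print(diagnostic_metrics(tp=80, fp=5, fn=10, tn=105))  # placeholder counts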

 

A diagnostic engine such as this can reduce the time clinicians spend on documentation and allow more time for discussion with the patient. For instance, a patient can bring the CDE printout to an appointment; it provides a detailed history plus information about social and psychological issues, a list of medications taken, and results of previous testing. The CDE was originally designed to help patients reach specialist-level evaluation amid a nationwide shortage of headache specialists: an estimated 45 million patients in the United States seek treatment for headache, yet there are only about 550 certified headache specialists. The printed information can help a patient obtain a consultation quickly and start evaluation and treatment earlier. This expert online consultation is currently free of charge.

 

Kwon et al developed a machine learning–based model designed to automatically classify headache disorders using questionnaire data. Their model predicted diagnoses for conditions such as migraine, tension-type headache, trigeminal autonomic cephalalgia, epicranial headache, and thunderclap headache. The model was trained on data from 2162 patients, all diagnosed by headache specialists, and achieved an overall accuracy of 81%, with a sensitivity of 88% and a specificity of 95% for diagnosing migraine. However, its performance was less robust for other headache disorders.
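
As a rough illustration of this general approach (not Kwon et al's actual pipeline), the sketch below trains a multiclass classifier on structured questionnaire responses labeled with a specialist's diagnosis. The file name, feature columns, and choice of a random-forest model are assumptions.

# Illustrative multiclass questionnaire classifier; not Kwon et al's model.
# Assumes a hypothetical CSV in which each row is one patient, questionnaire
# items are already numerically encoded, and the specialist's diagnosis is the label.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("headache_questionnaire.csv")        # hypothetical file
X = df.drop(columns=["specialist_diagnosis"])         # questionnaire items
y = df["specialist_diagnosis"]                        # e.g., migraine, TTH, TAC

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = RandomForestClassifier(n_estimators=300, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))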

 

Katsuki et al developed an AI model to help nonspecialists accurately diagnose headaches. The model analyzed 17 variables and was trained on data from 2800 patients, with additional testing and refinement using data from another 200 patients. To evaluate its effectiveness, 2 groups of non-headache specialists each assessed 50 patients: 1 group relied solely on its own expertise, while the other used the AI model. The group without AI assistance achieved an overall accuracy of 46% (κ = 0.21), while the group using the AI model improved significantly, reaching an overall accuracy of 83.2% (κ = 0.68).

 

Building on their work with AI for diagnosing headaches, Katsuki et al conducted a study using a smartphone application that tracked user-reported headache events alongside local weather data. The AI model revealed that lower barometric pressure, higher humidity, and increased rainfall were linked to the onset of headache attacks. The application also tied specific weather patterns to attacks, such as a drop in barometric pressure beginning about 6 hours before headache onset. Applying AI to weather monitoring could prove important, given concerns that the rising frequency of severe weather events due to climate change may be worsening the severity and burden of migraine. Recent post hoc analyses of fremanezumab clinical trials have likewise provided evidence that weather changes can trigger headaches.
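
A simplified sketch of this kind of analysis appears below: it joins a self-reported headache log with hourly barometric pressure readings and checks whether pressure fell in the hours before each attack. The file names, column names, 6-hour window, and 2-hPa threshold are illustrative assumptions, not parameters of the published model.

# Sketch of joining a headache log with hourly weather data to flag pressure
# drops before attacks. File names, column names, and thresholds are assumptions.
import pandas as pd

headaches = pd.read_csv("headache_log.csv", parse_dates=["onset_time"])
weather = (pd.read_csv("hourly_weather.csv", parse_dates=["time"])  # local station data
           .sort_values("time")
           .set_index("time"))

def pressure_change_before(onset, hours=6):
    """Barometric pressure change (hPa) over the `hours` preceding onset."""
    window = weather.loc[onset - pd.Timedelta(hours=hours):onset, "pressure_hpa"]
    if len(window) < 2:
        return float("nan")
    return window.iloc[-1] - window.iloc[0]

headaches["pressure_change_6h"] = headaches["onset_time"].map(pressure_change_before)
# Fraction of attacks preceded by a drop of more than 2 hPa (arbitrary threshold)
print((headaches["pressure_change_6h"] < -2).mean())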

 

Rapoport and colleagues have also developed an application called Migraine Mentor, which accurately tracks headaches, triggers, health data, and response to medication on a smartphone. The patient spends 3 minutes a day answering a few questions about their day, whether they had a headache, and whether they took any medication. After 1 or 2 months, Migraine Mentor generates a detailed report of the data and current trends, which is sent to the patient and can then be shared with the clinician. The application also reminds patients when to document data and take medication.
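
For illustration, a minimal version of the underlying bookkeeping might look like the sketch below: a daily diary record plus a monthly roll-up of headache days, medication days, and average severity. This is not Migraine Mentor's actual data model; all field names are assumptions.

# Minimal sketch of a daily diary record and a monthly roll-up like the report
# described above. Not Migraine Mentor's actual data model; fields are assumptions.
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DailyEntry:
    day: date
    had_headache: bool
    severity_0_10: int | None = None                    # None when no headache
    triggers: list[str] = field(default_factory=list)   # e.g., ["poor sleep", "stress"]
    acute_med_taken: str | None = None                  # e.g., "rizatriptan 10 mg"

def monthly_summary(entries: list[DailyEntry]) -> dict:
    headache_days = [e for e in entries if e.had_headache]
    med_days = [e for e in entries if e.acute_med_taken]
    mean_severity = (sum(e.severity_0_10 or 0 for e in headache_days) /
                     len(headache_days)) if headache_days else 0.0
    return {
        "headache_days": len(headache_days),
        "acute_medication_days": len(med_days),   # flags possible medication overuse
        "mean_severity": round(mean_severity, 1),
    }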

 

Although the use of AI in headache medicine appears promising, caution must be exercised to ensure that accurate results and information are disseminated. One rapidly expanding application of AI is the widely popular ChatGPT (Chat Generative Pre-trained Transformer), a chatbot built on a large language model (LLM). An LLM is a deep learning model designed to recognize, translate, predict, summarize, and generate text in response to a given prompt. It is trained on an extensive dataset that includes a diverse array of books, articles, and websites, exposing it to varied language structures and styles; this training enables ChatGPT to generate responses that closely mimic human communication. LLMs are increasingly used in medicine to help generate patient documentation and educational materials.
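
As a hedged example of that last use case, the sketch below calls an LLM through the OpenAI Python SDK (version 1 or later) to draft a patient education handout. The model name is a placeholder, an API key is assumed to be set in the environment, and, as the next paragraph underscores, any output would need clinician review before reaching patients.

# Sketch of drafting patient education text with an LLM via the OpenAI Python
# SDK (v1+). The model name is a placeholder and OPENAI_API_KEY is assumed to
# be set; any draft requires clinician review, since LLMs can produce
# confident but inaccurate content.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You write plain-language patient education handouts. "
                    "Do not give individualized medical advice."},
        {"role": "user",
         "content": "Draft a short handout on common migraine triggers and "
                    "when to contact a clinician."},
    ],
)
print(response.choices[0].message.content)  # draft for clinician review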

 

Dr Fred Cohen, however, published a perspective piece detailing how LLMs such as ChatGPT can produce misleading and inaccurate answers. In his example, he tasked ChatGPT with describing the epidemiology of migraine in penguins; the model generated a well-written and highly believable manuscript titled "Migraine Under the Ice: Understanding Headaches in Antarctica's Feathered Friends." The manuscript claimed that migraine is more prevalent in male penguins than in females, that the peak age of onset is between 4 and 5 years, and that emperor and king penguins are more susceptible than other penguin species. The paper was entirely fictitious (no studies of migraine in penguins have been published to date), demonstrating that these models can confidently produce nonfactual material.

 

For years, technological advancements have been reshaping many aspects of life, and medicine is no exception. AI has been successfully applied to streamline medical documentation, identify new drug targets, and deepen our understanding of various diseases. Headache medicine is no exception: recent developments show significant promise, with AI aiding the diagnosis of migraine and other headache disorders and even helping identify potential drug targets for migraine treatment. Although limitations remain, the future of AI in headache medicine appears bright.

 

If you would like to read more about Dr. Cohen’s work on AI and migraine, please visit fredcohenmd.com or TikTok @fredcohenmd.