TOPLINE:
The artificial intelligence (AI) chatbot ChatGPT can significantly improve the readability of online cancer-related patient information while maintaining the content’s quality, a recent study found.
METHODOLOGY:
- Patients with cancer often search online for information after their diagnosis, most commonly on their oncologists’ websites. However, these materials often exceed the average reading level of the US population, limiting accessibility and comprehension.
- Researchers asked ChatGPT 4.0 to rewrite content about breast, colon, lung, prostate, and pancreatic cancer, aiming for a sixth-grade reading level. The content came from a random sample of documents from 34 patient-facing websites associated with National Comprehensive Cancer Network (NCCN) member institutions.
- Readability, accuracy, similarity, and quality of the rewritten content were assessed with several established metrics and tools: an F1 score, the harmonic mean of a model’s precision and recall; a cosine similarity score, which quantifies how closely the wording of two texts overlaps and is often used to detect plagiarism; and the DISCERN instrument, a validated questionnaire for rating the quality of written health information.
- The primary outcome was the mean readability score for the original and AI-generated content (a readability-scoring sketch follows this list).
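This summary does not name the specific readability formulas the researchers applied; the Flesch-Kincaid grade level is one widely used measure that maps text to a US school grade, and the sketch below shows how such a score is computed. It is a minimal pure-Python illustration, not the study’s pipeline: the syllable counter is a rough heuristic, and both example sentences are invented.

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, subtract a trailing silent 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """US grade level: 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

# Invented before/after sentences, purely for illustration.
original = ("Adjuvant chemotherapy is frequently administered following "
            "surgical resection to eradicate residual micrometastatic disease.")
rewrite = "After surgery, doctors may give chemo to kill any cancer cells left behind."
print(f"original: grade {flesch_kincaid_grade(original):.1f}")  # well above grade 12
print(f"rewrite:  grade {flesch_kincaid_grade(rewrite):.1f}")   # roughly grade 7-8
```

The same text scores lower after substituting shorter words and sentences, which is the mechanism the researchers credit for the readability gains reported below.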
TAKEAWAY:
- The original content had an average readability level equivalent to a university freshman (grade 13). Following the AI revision, the readability level improved to a high school freshman level (grade 9).
- The rewritten content had high accuracy, with an overall F1 score of 0.87 (a good score is 0.8-0.9).
- The rewritten content had a high cosine similarity score of 0.915 (scores range from 0 to 1, where 0 indicates no similarity and 1 indicates identical wording). Researchers attributed the improved readability to simpler words and shorter sentences (a toy sketch of the F1 and cosine similarity calculations follows this list).
- Quality assessment using the DISCERN instrument showed that the AI-rewritten content maintained a “good” quality rating, similar to that of the original content.
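For readers unfamiliar with the two accuracy metrics above, the sketch below shows how each is typically computed. It is a generic illustration, not the authors’ actual pipeline: the cosine similarity uses simple bag-of-words count vectors, the example sentences are invented, and the precision/recall inputs are made-up values chosen only to show how an F1 near the reported 0.87 arises.

```python
import math
import re
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine of the angle between bag-of-words count vectors (1.0 = identical wording)."""
    va = Counter(re.findall(r"\w+", a.lower()))
    vb = Counter(re.findall(r"\w+", b.lower()))
    dot = sum(va[t] * vb[t] for t in va.keys() & vb.keys())
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Invented sentences; one word changed, so similarity is high but below 1.
original = "Screening can detect breast cancer early, when treatment works best."
rewrite = "Screening can find breast cancer early, when treatment works best."
print(f"cosine similarity: {cosine_similarity(original, rewrite):.3f}")  # 0.900

# Made-up precision/recall, chosen only to show how an F1 of about 0.87 arises.
print(f"F1 at precision 0.85, recall 0.89: {f1_score(0.85, 0.89):.2f}")  # 0.87
```

A high cosine score alongside a lower grade level, as in the study, indicates the rewrite kept most of the original vocabulary and content while restructuring it into simpler prose.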
IN PRACTICE:
“Society has become increasingly dependent on online educational materials, and considering that more than half of Americans may not be literate beyond an eighth-grade level, our AI intervention offers a potential low-cost solution to narrow the gap between patient health literacy and content received from the nation’s leading cancer centers,” the authors wrote.
SOURCE:
The study, led by first author Andres A. Abreu, MD, of UT Southwestern Medical Center, Dallas, Texas, was published online in the Journal of the National Comprehensive Cancer Network.
LIMITATIONS:
The study was limited to English-language content from NCCN member websites, so the findings may not generalize to other sources or languages. Readability alone cannot guarantee comprehension, and factors such as material design and audiovisual aids were not evaluated.
DISCLOSURES:
The study did not report a funding source. The authors reported several disclosures, none related to the study: Herbert J. Zeh, MD, disclosed serving as a scientific advisor for Surgical Safety Technologies, and Dr. Polanco disclosed serving as a consultant for Iota Biosciences and Palisade Bio and as a proctor for Intuitive Surgical.
A version of this article first appeared on Medscape.com.