Dr. Glassman described her research in a presentation given at the annual meeting of the Society of Gynecologic Oncology.
While the AI model correctly identified all excellent-response patients, it misclassified about a third of poor-response patients as having excellent responses. Dr. Glassman speculated that the smaller number of images in the poor-response category may explain the misclassification.
Researchers took 435 representative still-frame images from pretreatment laparoscopic surgical videos of 113 patients with pathologically proven high-grade serous ovarian cancer. They used 70% of the images to train the model, 10% for validation, and 20% for testing. They developed the AI model with images from four anatomical locations (diaphragm, omentum, peritoneum, and pelvis), training it with deep learning and neural networks to extract morphological disease patterns for correlation with one of two outcomes: excellent response or poor response. An excellent response was defined as progression-free survival (PFS) of 12 months or more, and a poor response as PFS of 6 months or less. In the retrospective study, after 32 gray-zone patients were excluded, 75 patients (66%) had durable responses to therapy and 6 (5%) had poor responses.
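For readers curious how such a partition works in practice, the 70/10/20 split described above can be sketched in a few lines of Python. This is purely illustrative and is not the study authors' code; the function name and the random-shuffle approach are assumptions, and a real pipeline would typically split by patient rather than by image to avoid leakage between sets.

```python
import random

def split_images(image_ids, seed=0):
    """Illustrative sketch (not the authors' code): shuffle image IDs and
    partition them 70% train / 10% validation / 20% test, as in the study."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)  # fixed seed for reproducibility
    n = len(ids)
    n_train = int(0.70 * n)
    n_val = int(0.10 * n)
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]
    return train, val, test

# 435 images, as in the study
train, val, test = split_images(range(435))
print(len(train), len(val), len(test))  # → 304 43 88
```

Note that with 435 images the integer truncation leaves the remainder (88 images, about 20%) in the test set.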
The PFS was 19 months in the excellent-response group and 3 months in the poor-response group.
Clinicians have often observed differences in gross morphology within the single histologic diagnosis of high-grade serous ovarian cancer. The research intent was to determine whether AI could detect these distinct morphological patterns in the still-frame images taken at the time of laparoscopy and correlate them with eventual clinical outcomes. Dr. Glassman and colleagues are currently validating the model in a much larger cohort and plan to pursue clinical testing.
“The big-picture goal,” Dr. Glassman said in an interview, “would be to utilize the model to predict which patients would do well with traditional standard of care treatments and those who wouldn’t do well so that we can personalize the treatment plan for those patients with alternative agents and therapies.”
Once validated, the model could also be employed to identify patterns of disease in other gynecologic cancers or distinguish between viable and necrosed malignant tissue.
The study’s predominant limitation was its small sample size, which is being addressed in a larger ongoing study.
Funding was provided by a T32 grant, MD Anderson Cancer Center Support Grant, MD Anderson Ovarian Cancer Moon Shot, SPORE in Ovarian Cancer, the American Cancer Society, and the Ovarian Cancer Research Alliance. Dr. Glassman declared no relevant financial relationships.
FROM SGO 2022