TOPLINE:

A recent survey highlighted ethical concerns US oncologists have about using artificial intelligence (AI) to help make cancer treatment decisions and revealed some contradictory views about how best to integrate these tools into practice. Most respondents, for instance, said patients should not be expected to understand how AI tools work, but many also felt patients could make treatment decisions based on AI-generated recommendations. Most oncologists also felt responsible for protecting patients from biased AI, but few were confident that they could do so.

METHODOLOGY:

  • The US Food and Drug Administration (FDA) has approved a growing number of AI-based tools for use in various medical specialties over the past few decades, and increasingly, AI tools are being integrated into cancer care.
  • However, the uptake of these tools in oncology has raised ethical questions and concerns, including challenges with AI bias, error, or misuse, as well as issues explaining how an AI model reached a result.
  • In the current study, researchers asked 204 oncologists from 37 states for their views on the ethical implications of using AI for cancer care.
  • Among the survey respondents, 64% were men and 63% were non-Hispanic White; 29% were from academic practices, 47% had received some education on AI use in healthcare, and 45% were familiar with clinical decision models.
  • The researchers assessed respondents’ answers to various questions, including whether patients should provide informed consent for AI use and how oncologists would approach a scenario in which the AI model and the oncologist recommended different treatment regimens.

TAKEAWAY:

  • Overall, 81% of oncologists supported obtaining patient consent before using an AI model during treatment decisions, and 85% felt that oncologists needed to be able to explain an AI-based clinical decision model to use it in the clinic; however, only 23% felt that patients also needed to be able to explain an AI model.
  • When an AI decision model recommended a different treatment regimen than the treating oncologist, the most common response (36.8%) was to present both options to the patient and let the patient decide. Oncologists from academic settings were about 2.5 times more likely than those from other settings to let the patient decide. About 34% of respondents said they would present both options but recommend the oncologist’s regimen, whereas about 22% said they would present both but recommend the AI’s regimen. A small percentage would only present the oncologist’s regimen (5%) or the AI’s regimen (about 2.5%).
  • About three in four respondents (76.5%) agreed that oncologists should protect patients from biased AI tools; however, only about one in four (27.9%) felt confident they could identify biased AI models.
  • Most oncologists (91%) felt that AI developers were responsible for the medico-legal problems associated with AI use; fewer than half said oncologists (47%) or hospitals (43%) shared this responsibility.

IN PRACTICE:

“Together, these data characterize barriers that may impede the ethical adoption of AI into cancer care. The findings suggest that the implementation of AI in oncology must include rigorous assessments of its effect on care decisions, as well as decisional responsibility when problems related to AI use arise,” the authors concluded.

SOURCE:

The study, with first author Andrew Hantel, MD, from Dana-Farber Cancer Institute, Boston, was published last month in JAMA Network Open.

LIMITATIONS:

The study had a moderate sample size and response rate, although demographics of participating oncologists appear to be nationally representative. The cross-sectional study design limited the generalizability of the findings over time as AI is integrated into cancer care.

DISCLOSURES:

The study was funded by the National Cancer Institute, the Dana-Farber McGraw/Patterson Research Fund, and the Mark Foundation Emerging Leader Award. Dr. Hantel reported receiving personal fees from AbbVie, AstraZeneca, the American Journal of Managed Care, Genentech, and GSK.

A version of this article appeared on Medscape.com.
